1. Consider the game of chess. If you are developing a chess-playing agent, is the environment in question accessible? deterministic? static? There is a variant of chess called "Kriegspiel". Find out what it is and how its environment differs from that of chess (in our three dimensions).
2. Is it possible for an environment to be fully observable for one agent and only partially observable for another one?
3. Is it possible that an agent can model the same environment either as (a) partially observable but with deterministic actions, or (b) fully observable but with stochastic actions? Philosophically, what does this tell us about the nature of "randomness"? (Bonus points: see if you can make a connection to the way we understand eclipses now vs. the way they were understood by our ancestors.) There is a tiny code sketch of the (a)-vs-(b) idea after the questions.
4. Suppose you are trying to design an agent that has goals of achievement (i.e., it is judged based on whether or not it ended up in a state where the "goal" holds). Would you want the agent to be given "hard goals" (i.e., goals that must be satisfied) or "soft goals" (i.e., goals which, if satisfied, give the agent a reward, but which, if skipped, won't stop the agent from collecting the rewards it gets from other goals)? Focus on which is the harder "computational" problem.
(Bonus: see if you can make a connection between this question and your high-school life vs. your university life.)
5. An environment is called "ergodic" if an agent in that environment can reach any state from any other state. Can you think of examples of ergodic vs. non-ergodic environments? Which are easier for an agent? (In particular, think of an agent that doesn't want to "think too much" and prefers to do some random action and see what happens. How does such an agent fare in an ergodic vs. a non-ergodic environment? There is a small simulation sketch of exactly this after the questions.)
6. We talked about the fact that the agent needs to have a "model" of the environment--where the model tells it what the properties of a state in the environment are, and how the environment "evolves" both when left to itself and when you do an action on it. A model is considered "complete" if it is fully faithful to the real environment. Do you think it is reasonable to expect agents to have (a) complete models, (b) no models, or (c) partial models? Suppose an agent has a partial model, and according to its model, it should pick some action A at this point. Should it *always* pick A, or should it once in a while "live a little" and pick something other than A? The question here has something to do with a fundamental tradeoff called the "Exploration vs. Exploitation" tradeoff (sketched in code after these questions). See if you can relate this to the question of when you should make the decision about which area of computer science you should specialize in.
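P.S. A few throwaway Python sketches, in case you want to play with these questions concretely. Everything in them (the "dial", the "corridor", the payoff numbers) is made up purely for illustration.

For question 3: a dial whose full state includes a hidden offset. Given the full state, the action "advance" is perfectly deterministic; marginalize the hidden part out, and the very same action looks stochastic:

    import random

    # The full state is (position, hidden_offset); "advance" is a
    # deterministic function of the FULL state.
    def advance(position, hidden_offset):
        return (position + hidden_offset) % 6

    # An agent that sees only `position` can model this world as either
    # (a) partially observable + deterministic actions, or
    # (b) fully observable + stochastic actions: with hidden_offset
    # marginalized out, "advance" looks like a random jump.
    for _ in range(3):
        hidden_offset = random.randint(0, 5)   # the part the agent never sees
        print(advance(0, hidden_offset))       # looks random from the outside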
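For question 5: a "don't think, just act randomly" agent in a five-state corridor, with and without a trapdoor state it can never climb out of:

    import random

    # Actions: step left (-1) or right (+1) along states 0..4.
    # Ergodic version: every state is reachable from every other.
    # Non-ergodic version: state 0 is absorbing (a trapdoor).
    def step(state, action, ergodic):
        if not ergodic and state == 0:
            return 0                      # fell in; no way back out
        return max(0, min(4, state + action))

    def random_agent_reaches_goal(ergodic, goal=4, tries=1000):
        state = 2
        for _ in range(tries):
            if state == goal:
                return True
            state = step(state, random.choice([-1, 1]), ergodic)
        return False

    # The random agent will (with near certainty) stumble onto the goal in
    # the ergodic world, but may lock itself out forever in the other one.
    print(random_agent_reaches_goal(True), random_agent_reaches_goal(False))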
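And for the exploration vs. exploitation tradeoff in question 6: one standard trick is "epsilon-greedy"--mostly pick the action your (partial) model likes best, but with probability epsilon live a little and try something else. Here it is on a made-up three-armed slot machine:

    import random

    true_payoffs = [0.2, 0.8, 0.5]      # unknown to the agent
    estimates = [0.0, 0.0, 0.0]         # the agent's (partial!) model
    counts = [0, 0, 0]
    epsilon = 0.1                       # how often to live a little

    for t in range(1000):
        if random.random() < epsilon:
            a = random.randrange(3)                  # explore
        else:
            a = estimates.index(max(estimates))      # exploit the model
        reward = 1 if random.random() < true_payoffs[a] else 0
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # running average

    print(estimates)   # with some exploration, the best arm gets found

Notice that with epsilon = 0 the agent can get stuck exploiting a mediocre arm forever; that is the "always pick A" policy from the question.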
cheers
Rao