Friday, October 22, 2010

(Final) Blog questions on AI and Intelligent Agent Design..

1. Consider the game of chess. If you are developing a chess playing agent, is the environment in question accessible? deterministic? static?  There is a variant of Chess called "kriegspiel". Find out what it is and how its environment differs from that of chess (in our three dimensions). 

 2. Is it possible for an environment to be fully observable for one agent and only partially observable for another one?

3. Is it possible that an agent can model the same environment either as (a) partially observable but with deterministic actions or (b) fully observable but with stochastic actions? Philosophically, what does this tell us about the nature of "randomness"? (bonus points: see if you can make a connection to the way we understand eclipses now vs. the way they were understood by our ancestors). 

4. Suppose you are trying to design an  agent which has goals of achievement (i.e., it is judged based on whether or not it ended up in a state where the "goal" holds). Would you want the agent to be given "hard goals" (i.e., goals that must be satisfied) or "soft goals" (i.e., goals which, if satisfied, will give the agent a reward; but if skipped won't stop the agent from experiencing the rewards it gets from other goals).  Focus on which is a harder "computational" problem.
(bonus: see if you can make a connection between this question and your high-school life vs. your university life..)
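
To make the computational contrast concrete, here is a minimal sketch (in Python; the goal names, rewards, and costs are made up for illustration): hard goals reduce to a yes/no test over the final state, while soft goals force the agent to search over subsets of goals for the best net reward.

```python
# Toy illustration of why soft goals pose a harder computational problem:
# hard goals give a simple feasibility test, while soft goals require
# optimizing over all subsets of goals.  All numbers are hypothetical.
from itertools import combinations

goals = {"g1": 5, "g2": 3, "g3": 8}   # hypothetical reward for each goal
cost  = {"g1": 2, "g2": 4, "g3": 6}   # hypothetical cost to achieve each goal

def hard_goal_success(achieved):
    """Hard goals: the agent succeeds only if EVERY goal holds."""
    return all(g in achieved for g in goals)

def best_soft_goal_subset():
    """Soft goals: pick the subset maximizing total reward minus cost."""
    best, best_value = frozenset(), 0
    for r in range(len(goals) + 1):
        for subset in combinations(goals, r):
            value = sum(goals[g] - cost[g] for g in subset)
            if value > best_value:
                best, best_value = frozenset(subset), value
    return best, best_value

print(hard_goal_success({"g1", "g2"}))   # False: g3 was not achieved
print(best_soft_goal_subset())
```

With hard goals the check is a single conjunction; with soft goals even this tiny example already forces a search over 2^3 candidate subsets, which is the source of the extra computational difficulty.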

5. An environment is called "ergodic" if an agent in that environment can reach any state from any other state.   Can you think of examples of ergodic vs. non-ergodic environments? Which are easier for an agent? (in particular, think of an agent which doesn't want to "think too much" and prefers to do some random action and see what happens. How does such an agent fare in an ergodic vs. non-ergodic environment?)
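
Here is a toy sketch (Python; the line world, trap state, and step counts are invented for illustration) of how a "don't think too much" random agent fares in an ergodic world versus one with an absorbing trap state:

```python
import random

def random_agent_reaches_goal(ergodic, start=0, goal=3, trap=-2, steps=1000):
    """A "don't think too much" agent takes random +1/-1 steps on a number
    line.  In the non-ergodic version the trap state is absorbing: once
    entered, no action leads back out, so the goal becomes unreachable."""
    pos = start
    for _ in range(steps):
        if pos == goal:
            return True
        if not ergodic and pos == trap:
            return False            # absorbing state: stuck forever
        pos += random.choice([-1, 1])
    return pos == goal

random.seed(0)
trials = 200
ergodic_hits = sum(random_agent_reaches_goal(True) for _ in range(trials))
nonergodic_hits = sum(random_agent_reaches_goal(False) for _ in range(trials))
print(ergodic_hits, nonergodic_hits)
```

In the ergodic world every state remains reachable, so random flailing eventually stumbles onto the goal; in the non-ergodic world one bad random step can make the goal permanently unreachable, so the lazy random agent succeeds far less often.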

6. We talked about the fact that the agent needs to have a "model" of the environment--where the model tells it what the properties of a state in the environment are, and how the environment "evolves" both when left to itself and when you do an action on it. A model is considered "complete" if it is fully faithful to the real environment. Do you think it is reasonable to expect agents to have (a) complete models (b) no models or (c) partial models? Suppose an agent has a partial model, and according to its model, it should pick some action A at this point. Should it *always* pick A, or should it once in a while "live a little" and pick something other than A? This question has something to do with a fundamental tradeoff called the "Exploration vs. Exploitation" tradeoff. See if you can relate this to the question of when you should make the decision about which area of computer science you should specialize in. 
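
One standard way to act on a partial model while still "living a little" is an epsilon-greedy rule. The sketch below (Python; the reward numbers and the two-action setup are invented for illustration) explores a random action with small probability and exploits the model's current best guess otherwise:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon "live a little" (explore a random action);
    otherwise exploit the action the partial model currently thinks is best."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))               # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# Toy two-action problem: the agent's partial model starts out wrong.
true_rewards = [0.2, 0.8]   # hypothetical: action 1 is actually better
q = [0.5, 0.0]              # but the model initially prefers action 0
counts = [0, 0]
random.seed(1)
for _ in range(2000):
    a = epsilon_greedy(q, epsilon=0.1)
    reward = true_rewards[a] + random.gauss(0, 0.1)  # noisy observed reward
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]              # running-average update
print(q)   # estimates are pulled toward the true rewards by exploration
```

Because the agent occasionally deviates from its (initially wrong) model, its estimates converge toward the true values; a pure exploiter would have kept choosing action 0 forever and never discovered its mistake.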



  1. 1. To the chess agent, the game would be Accessible (mostly), Non-Deterministic (or Strategic), and Static. Accessible because the rules of the board and where the pieces are can be easily understood. You could argue that if the agent were playing a human player, then it isn't completely accessible, because the agent couldn't read the player's face, for example. Non-Deterministic because the agent has no way of knowing the opponent's moves beforehand. If you only consider the time during my agent's turn, then the world is static: no changes can occur to the board while it's my agent's turn. However, if you include the entire game, then it isn't really static, because the world will change during the opponent's turn. The environment of kriegspiel differs in several ways. Players can move more units at one time, and more things need to be taken into consideration, like unit strengths and ranges.

    2. Sure. Take the example of driving a car: each driver has a different perspective on the environment, so for one driver it may be fully observable while for another it is only partially observable.

    3. Yes. (a) would be like today's view of an eclipse: we understand that we don't know everything about an eclipse, but we can tell with pretty good certainty what's going to happen. (b) would be like the old view of an eclipse: they felt they had a complete understanding of how it worked, but didn't know when it would occur.

    4. I think that it really depends on what your goals are. Some goals would need to be hard, while some should be soft; the importance of the goal should really determine that. I would think that soft goals would be harder to prepare for: with a hard goal, you either achieve it or you don't, while with soft goals there is more ambiguity as to what counts as "success".

    5. An ergodic environment would be something like rolling a die: regardless of what the die showed last roll, it could be any value next roll. An easier situation would be something like a number line where you can only move up or down by one digit (not ergodic). There it would be much easier to determine the next state.

    6. Partial models make the most sense. A full model of the real world would be really difficult to envision; even humans don't have a full model of the world or environment. I think the answer to this question heavily depends on the purpose of your agent. Is it directing traffic? Then it should probably stick to always picking A (assuming a decent enough understanding of the environment). Is the agent directly interacting with humans? Then it should probably be a little more adventurous in its actions. This correlates to my major decision as well: at the beginning, a more explorative approach would make sense, while as you progress, a more settled view would be appropriate.

  2. 1. The environment differs because in kriegspiel the environment is not fully observable: the player cannot see the other player's chess pieces.
    2. Yes; just because one of the agents can observe the environment fully doesn't mean the other one can.
    3. (a) Yes, because you know what will happen as a result of your actions. (b) Yes, because you predict what might happen, and there is a probability you might get it right.
    4. Giving your agent hard goals will be a more difficult computational problem, and soft goals will be easier, but I'm not sure.

  3. 1. I think the chess game could be accessible. It's not hard to understand how to play, though there are people who can't grasp the idea of it. I don't think it's deterministic: we can't tell what the opponent's moves are at any point in time; all we could really do is just guess. The game, as a whole, would change throughout the time the game lasts, but I would still consider it to be static. Kriegspiel definitely differs, in the way it's played at least. It would definitely be non-deterministic and considered static, and I think it would also be only partially accessible.

    2. Yes. It's kind of like a camera looking at a human: the environment is only partially observable to the human, but fully observable to the camera.

    3. They both are possible. I don't think either is that great an idea, but they are possible.

    4. In a computational respect, it would be easier to just make an agent with hard goals; it's a lot simpler to create than one that has soft goals, in my opinion at least. It would be harder for the agent with hard goals to accomplish its purpose, but computationally I think that would be a simpler agent to create.

    5. I think the environment of the person who created meth could be considered ergodic: it was a matter, or might as well have been, of "let's add this and see what happens." Non-ergodic could be scientists trying to find a mixture to cure the next big disease. They're not going to pick random chemicals and see if they come out with a cure; they would make informed decisions based on what they know about the chemicals and proceed accordingly. Agents are better off behaving non-ergodically, but it would be easier for them to behave ergodically.

    6. I don't think it would be reasonable to expect models to be complete or to have no model at all; partial models are the most reasonable to expect. Wanting to "live a little" could be less favorable for us as college students: if we choose something we know nothing about simply for the thrill of it, then there are chances of worries arising. Spending money and wasting time simply isn't my type of thing. There are different situations in which an agent should do what it is supposed to do or do whatever it wants; the better decision would have to depend on the situation.

  4. 1. Accessible-Yes (Steps or Rules to follow)
    Deterministic-No (Can’t foretell opponent’s move)
    Static-Yes (Board does not change between moves)
    The ‘kriegspiel’ environment changes constantly: different units can be moved, each unit has its own strengths and weaknesses, and each moves in a certain way.

    2. Yes: at a swimming pool, a lifeguard sits atop the tower and sees all, but swimmers can only partially observe their environment.

    3. (a) Now we know about eclipses, but we also know there is more to learn, and we can estimate when they will happen.
    (b) Our ancestors thought they knew everything about eclipses, but they could not estimate when an eclipse would appear.

    4. This is just like the comparison in class to the math homework and Goldilocks, which made all the sense in the world. So, I would try to make the goals somewhere in the middle of being hard and soft. Goals in high school seemed to be soft and homework did not need to be done, while homework in college seems either to be just right and help in class, or too hard and not help.

    5. Ergodic: random variables (rolling a die; each roll cannot be determined)
    Non-ergodic: static (the ticking of a clock; when each tick will occur can be calculated)

    6. (a) Complete models: possibly the hardest, because environments are often changing and cannot be modeled completely.
    (b) No model: not very many agents are incapable of learning and modeling their environment.
    (c) Partial models (BEST): this is the most likely, because an agent with almost any sensor can learn its environment.
    Whether the agent should always choose A depends on what would happen if it did not. For example, a student deciding not to do homework is not as crucial as a driver deciding not to stop for a red light. When choosing which area of computer science to specialize in, students should take many different classes to see which is best suited for them.

  5. 1. It is a fully accessible, non-deterministic, quasi-static environment, since the environment changes with the agent's actions and the opponent's actions.

    Kriegspiel would be partially accessible, since only the player's own pieces can be seen. It is non-deterministic, since the other player can make whatever moves they wish. It is still quasi-static, since the opponent must make a move for the game to continue.

     2. Yes, imagine that a group of kids is playing tag in the dark and the person that is it gets night-vision goggles. The environment is fully observable by the person that is it, while the people not trying to be it can only sense their environment through touch and sound.

    3. For a partially observable but deterministic environment, an example would be how we can't map out the placement of every atom or even every object, but we know how objects will behave generally.

    For a fully observable, but stochastic environment, we can imagine that, on the quantum level, we may be able to see all the particles, but how they will behave is unpredictable.

    4. I would design the agent so that if a goal is skipped, it can still experience rewards for other goals. This is an easier problem since not every goal needs to be accomplished in order for a reward to be obtained. In high school, all goals must be achieved for a reward to be given (where the goals are to pass each class), while in College not all goals must be achieved in order to obtain a reward (some classes can be dropped).

    5. An ergodic environment could be one in which an agent needs to get from one spot to another on a plane: no matter where the agent is, it can easily get to the other spot, or “state.” A non-ergodic environment could be making bread: the sequence of steps cannot be changed, or the end product, or state, will not be achieved. An ergodic environment would be easiest for an agent, since it does not necessarily need to think about which state it is currently in.

    6. The most reasonable model would be a partial one. I don't think it should ALWAYS do the action since its model is only partial, so the action may not be needed at that particular instance.

  6. 1. It is accessible because you can program an agent to know the rules. It can be deterministic because you can predict your opponent's moves: if you put your pieces in danger, your opponent will attack, so you have to program your agent not to make such moves. It is static on your turn because the board does not change until you move. Kriegspiel chess would not be fully observable.

    2. Yes, because the environment is observable through sensors: it can be observable by one agent depending on the sensors it uses, and not observable by another if its sensors are aimed at observing a different environment.

    3. I think Micheal got this one right.

    4. Soft goals would be a lot harder to compute, but hard goals would be a lot harder to accomplish.

    5. An ergodic environment would be one in which an agent's state is determined by random forces, such as rolling a die, because you can wind up in any state on any roll. An environment in which an agent's state is determined by something simpler, like an increment or decrement, would be non-ergodic, because the next state would be predefined and thus a lot easier to compute.

    6. It is unreasonable to expect a complete model or no model; a partial model is the most reasonable to expect. I don't think the agent should be programmed to do the same action every time, because after all, the model isn't an accurate representation of the environment.

  7. 1. It would be fully accessible because it has rules to follow; it is deterministic because there are moves you can make to avoid losing pieces; and it is static because it doesn't change when no one is doing anything. The only thing that would change with Kriegspiel is that it would not be deterministic.

    2. Yes: you could have partial observability for a person in a corn maze, and full observability for someone in a plane above the maze.

    3. No, I don't think you could model the same environment both ways, because if it is deterministic you know what everything will do, but in a stochastic environment you will still have random variables that make it differ from the deterministic environment.

    4. I think it would be easier to create an agent with hard goals, because there are only two results from that, but with soft goals you can get many results.

    5. Ergodic would be anything that relied on chance, such as craps, while non-ergodic could be blackjack. It would be easier to create an ergodic agent.

    6. The most reasonable would be a partial model, because it allows the agent to learn by choosing different actions.
