Thursday, August 30, 2007
I hope all of you are progressing well with the first LISP project (a.k.a. assignment-0). Here are the instructions for its submission.
- You must submit a hardcopy of your report in class. The report should include your code along with a sample run of the solution for each problem (say, 2-3 input cases). You can add short comments on your solutions if you think it necessary.
- And, you should submit only the "Code" part of the report in the Blackboard section of this course (on http://my.asu.edu). I have opened a link for uploading your file -- check under the "Assignments" section on Blackboard. Put all your functions in a single .lisp file, name the file <firstname_lastname>.lisp, and upload it. (Note that you can submit only once.)
Let me know if you have any questions.
Please retrieve it again if you already retrieved it yesterday.
Wednesday, August 29, 2007
Here are a few things that you can think about and/or comment on.
(Optional ;-) Estimate how near-sighted an instructor needs to be to not
notice that you are sleeping and/or doing something below the desk. The estimate can
be analytical, and can depend on your height, distance from the podium, decibels of snoring, etc.
 (Iterative deepening search)
The following are two of the characteristics of iterative deepening search:
a. It uses depth-first search in each of its iterations.
b. It increments the "depth cutoff" for the tree uniformly (i.e., there is one single depth limit
for all branches).
Suppose I consider the following changes to IDDFS:
a': Instead of using depth-first search, use breadth-first search in each iteration.
b': Instead of using a uniform depth cutoff, use non-uniform depth cutoffs (i.e., the "search fringe" will
go to varying depths in different branches).
How do a' and b' affect
(i) completeness (ii) optimality (iii) time complexity (iv) space complexity properties?
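For concreteness, here is a minimal sketch of standard IDDFS showing both characteristics -- depth-first search inside each iteration (a) and one uniformly incremented depth cutoff shared by all branches (b). It is written in Python rather than Lisp, and the successor function and toy tree are made up for illustration, not part of the assignment.

```python
def iddfs(start, goal, successors, max_depth=50):
    """Iterative deepening: repeat a depth-limited DFS while
    raising one uniform depth cutoff (characteristic b)."""

    def dls(node, limit):
        # Plain depth-first search with a depth limit (characteristic a)
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in successors(node):
            path = dls(child, limit - 1)
            if path is not None:
                return [node] + path
        return None

    for cutoff in range(max_depth + 1):  # one single limit for all branches
        path = dls(start, cutoff)
        if path is not None:
            return path
    return None

# Toy tree (illustrative): node n has children 2n and 2n+1, up to a bound
succ = lambda n: [2 * n, 2 * n + 1] if n < 16 else []
print(iddfs(1, 11, succ))  # -> [1, 2, 5, 11], the shallowest path
```

With this skeleton in hand, changes a' and b' amount to swapping out `dls` or letting each branch carry its own limit, which is a useful way to think through questions (i)-(iv).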
(Effect of the goal distribution)
In the discussion we had in class, I assumed that the goal node(s) can appear at any level of the
search tree, and anywhere within that level. If I happen to know that all goal nodes will appear only at level (depth) "m",
would it change the relative tradeoffs between breadth-first and depth-first? Can you think of some problems where
this type of property holds?
(search on finite graphs with cycles)
We noticed that even if the number of states is finite, it is possible for a search to "loop"--i.e., go back and forth between
the same states multiple times. This is a problem for both breadth-first and depth-first search, but is a bigger problem for depth-first
(since it may not even terminate). One way of preventing this problem is to maintain a "CLOSED" list that contains all the nodes that have
already been expanded. Before adding any new node to the queue (i.e., the OPEN list), we check whether the state for that
node already exists on the CLOSED list. If it does, we don't put it back on the OPEN list.
How does this simple idea affect the time and space complexity of depth-first and breadth-first search?
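To make the bookkeeping concrete, here is a minimal Python sketch (the tiny graph below is made up for illustration) of breadth-first search with a CLOSED set: before a node's children go on the OPEN list, already-expanded states are filtered out, so the A->B->C->A cycle cannot send the search around in circles.

```python
from collections import deque

def bfs_with_closed(start, goal, successors):
    """Breadth-first search with a CLOSED list (here a set) of
    already-expanded states, so cycles cannot cause re-expansion."""
    open_list = deque([(start, [start])])  # OPEN: (state, path so far)
    closed = set()                         # CLOSED: expanded states
    while open_list:
        state, path = open_list.popleft()
        if state == goal:
            return path
        if state in closed:                # already expanded via another path
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:        # don't put expanded states back on OPEN
                open_list.append((child, path + [child]))
    return None

# Tiny cyclic graph: A -> B -> C -> A, plus C -> D
graph = {'A': ['B'], 'B': ['C'], 'C': ['A', 'D'], 'D': []}
print(bfs_with_closed('A', 'D', graph.get))  # -> ['A', 'B', 'C', 'D']
```

Note that the CLOSED set can grow to hold every reachable state, which is exactly the space-complexity question posed above -- it matters far more for depth-first search, whose usual selling point is small memory use.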
Monday, August 27, 2007
[Discussion Topic--must comment] Optimality; prior knowledge; discounted rewards; Environment vs. Agent complexity..
By now you have all had enough time to sign up for the class blog. As I said, participation is "required" in this class. Participation involves
doing assigned readings, asking questions (as needed) in class, and most importantly, taking part in the class blog discussions. Here is the first discussion topic
for your edification.
As for the quantity vs. quality of your comments, I suggest you go by the Woody Allen quote below for guidance.. ;-)
Here are some of the things on which I would like to see discussion/comments from the class:
1. Optimality--given that most "human agents" are anything but provably optimal, does it make sense for us to focus on optimality of our agent algorithms? Also, if you have more than one optimality objective (e.g., cost of travel and time of travel), what should be the goal of an algorithm that aims to find "optimal" solutions?
2. Prior Knowledge--does it make sense to consider agent architectures where prior knowledge and representing and reasoning with it play such central roles? Also, is it easy to compare the "amount" of knowledge that different agents start with?
3. Environment vs. Agent complexity--One big issue in agent design is that an agent may have very strong limitations on its memory and computational resources. A desirable property of an agent architecture should be that we can instantiate it for any <agent, environment> pair, no matter how complex the environment and how simplistic the agent. Comment on whether or not this property holds for the architectures we saw. Also, check out "Simon's Ant" on the web and see why it is related to this question.
4. Learning--In the class, we said that an agent can learn (and improve) its knowledge about how the world evolves and how its actions affect the world etc. One thing that was not clarified is whether "utilities" are learned or given/hard-wired. Any comments (using your knowledge of humans)?
5. Anything else from the first three classes that you want to hold-forth on.
"The question is have I learned anything about life. Only that human beings are divided into mind and body. The mind embraces all the nobler aspirations, like poetry and philosophy, but the body has all the fun. The important thing, I think, is not to be bitter... if it turns out that there IS a God, I don't think that He's evil. I think that the worst you can say about Him is that basically He's an underachiever. After all, there are worse things in life than death. If you've ever spent an evening with an insurance salesman, you know what I'm talking about. The key is to not think of death as an end, but as more of a very effective way to cut down on your expenses. Regarding love, heh, what can you say? It's not the quantity of your sexual relations that counts. It's the quality. On the other hand if the quantity drops below once every eight months, I would definitely look into it. Well, that's about it for me folks. Goodbye."
---Boris in Love & Death (1975 http://us.imdb.com/title/tt0073312/ )
Sunday, August 26, 2007
Saturday, August 25, 2007
(if you like this sort of thing, you might also consider seeing "Why you can't tickle yourself" http://learning.eng.cam.ac.uk/wolpert/talks/tickle.ram
or the full talk at http://learning.eng.cam.ac.uk/wolpert/talks/wolpert.ram )
Scientists Induce Out-of-Body Sensation
Using virtual reality goggles, a camera and a stick, scientists have induced out-of-body experiences — the sensation of drifting outside of one's own body — in healthy people, according to experiments being published in the journal Science.
When people gaze at an illusory image of themselves through the goggles and are prodded in just the right way with the stick, they feel as if they have left their bodies.
The research reveals that "the sense of having a body, of being in a bodily self," is actually constructed from multiple sensory streams, said Matthew Botvinick, an assistant professor of neuroscience at Princeton University, an expert on body and mind who was not involved in the experiments.
Usually these sensory streams, which include vision, touch, balance and the sense of where one's body is positioned in space, work together seamlessly, Prof. Botvinick said. But when the information coming from the sensory sources does not match up, when they are thrown out of synchrony, the sense of being embodied as a whole comes apart.
The brain, which abhors ambiguity, then forces a decision that can, as the new experiments show, involve the sense of being in a different body.
The research provides a physical explanation for phenomena usually ascribed to other-worldly influences, said Peter Brugger, a neurologist at University Hospital in Zurich, Switzerland. After severe and sudden injuries, people often report the sensation of floating over their body, looking down, hearing what is said, and then, just as suddenly, find themselves back inside their body. Out-of-body experiences have also been reported to occur during sleep paralysis, the exertion of extreme sports and intense meditation practices.
The new research is a first step in figuring out exactly how the brain creates this sensation, he said.
The out-of-body experiments were conducted by two research groups using slightly different methods intended to expand the so-called rubber hand illusion.
In that illusion, people hide one hand in their lap and look at a rubber hand set on a table in front of them. As a researcher strokes the real hand and the rubber hand simultaneously with a stick, people have the vivid sense that the rubber hand is their own.
When the rubber hand is whacked with a hammer, people wince and sometimes cry out.
The illusion shows that body parts can be separated from the whole body by manipulating a mismatch between touch and vision. That is, when a person's brain sees the fake hand being stroked and feels the same sensation, the sense of being touched is misattributed to the fake.
The new experiments were designed to create a whole body illusion with similar manipulations.
Dr. Olaf Blanke, a neuroscientist at the École Polytechnique Fédérale in Lausanne, Switzerland, asked people to don virtual reality goggles while standing in an empty room. A camera projected an image of each person, taken from the back, displayed 6 feet away. The subjects thus saw an illusory image of themselves standing in the distance.
Then Dr. Blanke stroked each person's back for one minute with a stick while simultaneously projecting the image of the stick onto the illusory image of the person's body.
When the strokes were synchronous, people reported the sensation of being momentarily within the illusory body. When the strokes were not synchronous, the illusion did not occur.
In another variation, Dr. Blanke projected a "rubber body" — a cheap mannequin bought on eBay and dressed in the same clothes as the subject — into the virtual reality goggles. With synchronous strokes of the stick, people's sense of self drifted into the mannequin.
A separate set of experiments was carried out by Dr. Henrik Ehrsson, an assistant professor of neuroscience at the Karolinska Institute in Stockholm, Sweden.
Last year, when Dr. Ehrsson was, as he says, "a bored medical student at University College London", he wondered, he said, "what would happen if you 'took' your eyes and moved them to a different part of a room? Would you see yourself from where your eyes were placed? Or from where your body was placed?"
To find out, Dr. Ehrsson asked people to sit on a chair and wear goggles connected to two video cameras placed 6 feet behind them. The left camera projected to the left eye. The right camera projected to the right eye. As a result, people saw their own backs from the perspective of a virtual person sitting behind them.
Using two sticks, Dr. Ehrsson stroked each person's chest for two minutes with one stick while moving a second stick just under the camera lenses — as if it were touching the virtual body.
Again, when the stroking was synchronous people reported the sense of being outside their own bodies — in this case looking at themselves from a distance where their "eyes" were located.
Then Dr. Ehrsson grabbed a hammer. While people were experiencing the illusion, he pretended to smash the virtual body by waving the hammer just below the cameras. Immediately, the subjects registered a threat response as measured by sensors on their skin. They sweated and their pulses raced.
They also reacted emotionally, as if they were watching themselves get hurt, Dr. Ehrsson said.
People who participated in the experiments said that they felt a sense of drifting out of their bodies but not a strong sense of floating or rotating, as is common in full-blown out-of-body experiences, the researchers said.
The next set of experiments will involve decoupling not just touch and vision but other aspects of sensory embodiment, including the felt sense of the body position in space and balance, they said.
Friday, August 24, 2007
Accessibility is synonymous with observability (and both are synonymous with "senseability"): perfect sensing <-> full accessibility <-> full observability.
I had a small question about P1 on HW1. I noticed the term "accessibility" and I was not sure what this meant in the context of environments. Does it just mean "fully observable"? This term is repeated a few times in other problems on this HW, so I thought I had better clarify to make sure.
Thanks in advance!