Thursday, August 30, 2007

[cse 471] Instructions on submitting Assignment-0 (LISP refresher)

Hello all,

I hope all of you are progressing well with the first LISP project (a.k.a. Assignment-0). Here are the instructions for its submission.

-  You must submit a hardcopy of your report in class. The report should include your code along with a sample run of the solution for each problem (say, 2-3 input cases). You can add short comments to your solutions if you think it's necessary.

-  In addition, you should submit only the "Code" part of the report in the Blackboard section of this course (on http://my.asu.edu). I have opened a link for uploading your file -- check under the "Assignments" section on Blackboard. Put all your functions in a single .lisp file, name the file <firstname_lastname>.lisp, and upload it (note that you can submit only once). A rough sketch of what such a file might look like is given below.
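The file name, the problem, and the function in this sketch are hypothetical, made up for illustration only; they are not part of the assignment.

;; john_doe.lisp -- hypothetical file name; use <firstname_lastname>.lisp with your own name.

;; Problem 1 (made-up example): sum the numbers in a list.
(defun sum-list (lst)
  "Return the sum of the numbers in LST."
  (if (null lst)
      0
      (+ (first lst) (sum-list (rest lst)))))

;; Sample runs (include output like this in the hardcopy report):
;; (sum-list '(1 2 3))  => 6
;; (sum-list '())       => 0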

Let me know if you have any questions.

Thanks,
Aravind

[Announcement][*Important*]: Missing Question 5 added to Homework 1

Question 5 was missing from the homework (question 6 came right after question 4). I have added question 5 --
please retrieve the homework again if you already retrieved it yesterday.

Rao

Wednesday, August 29, 2007

[Announcement] Project 1 has been assigned and is available on the homepage.

You should be able to get started on Task 1. Task 2 (and 3) will be easy to start after next class.

rao

[Announcement]: Additional problems added to Homework 1. It is now due on 9/10

[8/29 Discussion/Thinking Cap topic for BLOG] Search algorithms...



Here are a few things that you can think about and/or comment on.

[0] (optional ;-) Estimate how near-sighted an instructor needs to be to not
notice that you are sleeping and/or doing something below the desk. The estimate can
be analytical, and can depend on your height, distance from the podium, decibels of snoring, etc.


[1] (Iterative deepening search)
 
The following are two of the characteristics of iterative deepening search:

a. It uses depth-first search in each of its iterations.
b. It increments the "depth cutoff" for the tree uniformly (i.e., there is a single depth limit
for all branches).

Suppose I am considering the following changes to IDDFS:

a': Instead of using depth-first search, use breadth-first search in each iteration.
b': Instead of using a uniform depth cutoff, use non-uniform depth cutoffs (i.e., the "search fringe" will
go to varying depths in different branches).

How do a' and b' affect the following properties:

(i) completeness, (ii) optimality, (iii) time complexity, and (iv) space complexity?
(A sketch of plain IDDFS is included below for reference.)
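This sketch is in Common Lisp; the SUCCESSORS and GOAL-P functions are assumed to be supplied for the particular problem, and it is only an illustration, not code from the lectures or the project.

(defun depth-limited-search (node goal-p successors limit)
  "Depth-first search from NODE, going at most LIMIT levels below it.
Returns the path from NODE to a goal node as a list, or NIL on failure."
  (cond ((funcall goal-p node) (list node))
        ((zerop limit) nil)
        (t (dolist (child (funcall successors node))
             (let ((result (depth-limited-search child goal-p successors (1- limit))))
               (when result
                 (return (cons node result))))))))

(defun iterative-deepening-search (start goal-p successors max-depth)
  "Run depth-limited search with the uniform cutoff 0, 1, 2, ... up to MAX-DEPTH."
  (loop for limit from 0 to max-depth
        for result = (depth-limited-search start goal-p successors limit)
        when result return result))

;; Toy example (state n has children 2n and 2n+1):
;; (iterative-deepening-search 1
;;                             (lambda (n) (= n 6))
;;                             (lambda (n) (list (* 2 n) (1+ (* 2 n))))
;;                             10)
;; => (1 3 6)

In terms of this sketch, a' would change the inner depth-limited search to a breadth-first one, and b' would replace the single LIMIT with cutoffs that can differ from branch to branch.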


[2] (Effect of the goal distribution)
In the discussion we had in class, I assumed that the goal node(s) can appear at any level of the
search tree, and anywhere in that level. If I happen to know that all goal nodes will appear only at level (depth) "m",
would that change the relative tradeoffs between breadth-first and depth-first? Can you think of some problems where
this type of property holds?

[3] (Search on finite graphs with cycles)
We noticed that even if the number of states is finite, it is possible for a search to "loop"--i.e., go back and forth between
the same state multiple times. This is a problem for both breadth-first and depth-first, but is a bigger problem for depth-first
(since it may not even terminate). One way of preventing this problem is to maintain a "CLOSED" list that contains all the nodes that have
already been expanded. Before adding any new node to the queue (i.e., the OPEN list), we check whether the state for that
node already exists on the CLOSED list. If it does, we don't put the node back on OPEN.

How does this simple idea affect the time and space complexity of depth-first and breadth-first search? (A sketch of the idea is given below.)
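This sketch shows breadth-first search with a CLOSED list in Common Lisp, along the lines described above; SUCCESSORS and GOAL-P are assumed to be problem-specific functions, and the code is an illustration, not the code we will use in class.

(defun breadth-first-graph-search (start goal-p successors)
  "Breadth-first search that never re-expands a state already on the CLOSED list.
Returns a goal state, or NIL if the OPEN list empties out."
  (let ((open (list start))                        ; OPEN list, treated as a FIFO queue
        (closed (make-hash-table :test #'equal)))  ; CLOSED list of expanded states (hash table for quick lookup)
    (loop while open
          do (let ((node (pop open)))
               (cond ((funcall goal-p node) (return node))
                     ((gethash node closed))       ; already expanded earlier: skip it
                     (t (setf (gethash node closed) t)
                        ;; add only those children whose states are not already on CLOSED
                        (setf open (append open
                                           (remove-if (lambda (s) (gethash s closed))
                                                      (funcall successors node))))))))))

;; Toy cyclic graph: a -> b, b -> a, b -> c
;; (breadth-first-graph-search 'a
;;                             (lambda (s) (eq s 'c))
;;                             (lambda (s) (case s (a '(b)) (b '(a c)) (t nil))))
;; => C

The depth-first variant is the same idea with the children pushed onto the front of OPEN instead of being appended to the back.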


Rao

Monday, August 27, 2007

Understanding the brain and then building intelligent agents

This is regarding the discussion we had in class the other day. Here is a talk by a guy who's trying to model the brain to help build intelligent agents.

(more on) whether we have strong linguistic knowledge ingrained at birth..(optional reading)

In the class, I mentioned that all human infants come into this world with what can be thought of as a "universal grammar" that they can "tune" to the local language they are hearing around them. In other words, language is not wholly learned from the outside--contrary to common wisdom (the story of how "universal grammar" came about is also a fascinating one--see the mail that I sent to the class last year: http://rakaposhi.eas.asu.edu/f06-cse471-mailarchive/msg00090.html )
 
One question is whether there is really something special about the set of "human languages" as a whole that is different from any other languages.
 
In particular, if a human baby were to be given away to martians (or other aliens that regularly visit Area 51 and certain Phoenix suburbs), would the baby be able to master the martian language? Conversely, if human babies were to get together--without any intervention from adults--and were to make a brand new language, would it be closer to all the other human languages than it is to any other language?
 
We can answer the latter very much in the affirmative, thanks to the fascinating real-life story of Nicaraguan Sign Language: a bunch of Nicaraguan deaf kids, ignored by their war-torn society, over a period of time developed a new sign language all their own from scratch. And it is *very* similar to other human languages (see http://en.wikipedia.org/wiki/Nicaraguan_Sign_Language ).
 
We don't yet have clear and convincing evidence that babies can't learn martian and other alien languages, but we do know that human kids brought up without human contact are unable to develop language (see http://en.wikipedia.org/wiki/Feral_child )--in other words, the underlying universal grammar is able to identify and adapt only to "human" languages!
 
In our zeal to accentuate differences, we fail to note that in the spectrum of possible languages, all human languages form a really tight cluster--and would be seen so by a martian visiting earth..
 
Rao
 
 
 

[Discussion Topic--must comment] Optimality; prior knowledge; discounted rewards; Environment vs. Agent complexity..

[[Folks:
 By now you have all had enough time to get yourselves signed up to the class blog. As I said, participation is "required" in this class. Participation involves
doing the assigned readings, asking questions (as needed) in class, and most importantly, taking part in the class blog discussions. Here is the first discussion topic
for your edification.

As for the quantity vs. quality of your comments, I suggest you go by the Woody Allen quote below for guidance.. ;-)]]


Here are some of the things on which I would like to see discussion/comments from the class:

1. Optimality--given that most "human agents" are anything but provably optimal, does it make sense for us to focus on the optimality of our agent algorithms? Also, if you have more than one optimality objective (e.g., cost of travel and time of travel), what should be the goal of an algorithm that aims to get "optimal" solutions?

2. Prior Knowledge--does it make sense to consider agent architectures where prior knowledge and representing and reasoning with it play such central roles? Also, is it easy to compare the "amount" of knowledge that different agents start with?

3. Environment vs. Agent complexity--One big issue in agent design is that an agent may have very strong limitations on its memory and computational resources. A desirable property of an agent architecture should be that we can instantiate it for any <agent, environment> pair, no matter how complex the environment and how simplistic the agent. Comment on whether or not this property holds for the architectures we saw. Also, check out "Simon's Ant" on the web and see why it is related to this question.

4. Learning--In the class, we said that an agent can learn (and improve) its knowledge about how the world evolves, how its actions affect the world, etc. One thing that was not clarified is whether "utilities" are learned or given/hard-wired. Any comments (using your knowledge of humans)?

5. Anything else from the first three classes that you want to hold forth on.

Rao


----------------

"
The question is have I learned anything about life. Only that human beings are divided into mind and body. The mind embraces all the nobler aspirations, like poetry and philosophy, but the body has all the fun. The important thing, I think, is not to be bitter... if it turns out that there IS a God, I don't think that He's evil. I think that the worst you can say about Him is that basically He's an underachiever. After all, there are worse things in life than death. If you've ever spent an evening with an insurance salesman, you know what I'm talking about. The key is, to not think of death as an end, but as more of a very effective way to cut down on your expenses. Regarding love, heh, what can you say? It's not the quantity of your sexual relations that counts. It's the quality. On the other hand if the quantity drops below once every eight months, I would definitely look into it. Well, that's about it for me folks. Goodbye. "
                ---Boris in Love & Death (1975 http://us.imdb.com/title/tt0073312/ )

Sunday, August 26, 2007

slides from LISP recitation

Hello all,
 
   Please find attached the slides from last Friday's LISP recitation (also linked at: http://www.public.asu.edu/~akalavag/fall2007_aravind_LISP_recitation.ppt ).
 
 Some of you were asking me if I can recommend a quick online tutorial for LISP. You can use the "LISP Primer" (http://mypage.iu.edu/~colallen/lp/). It is short, has good examples, and covers all the material needed for the LISP projects in this course.
 
Thanks,
Aravind

Saturday, August 25, 2007

Interesting article on out of body experiences...


(if you like this sort of thing, you might also consider seeing "Why you can't tickle yourself" http://learning.eng.cam.ac.uk/wolpert/talks/tickle.ram
or the full talk at http://learning.eng.cam.ac.uk/wolpert/talks/wolpert.ram )

Rao
-----------------

  The New York Times


August 23, 2007

Scientists Induce Out-of-Body Sensation

Using virtual reality goggles, a camera and a stick, scientists have induced out-of-body experiences — the sensation of drifting outside of one's own body — in healthy people, according to experiments being published in the journal Science.

When people gaze at an illusory image of themselves through the goggles and are prodded in just the right way with the stick, they feel as if they have left their bodies.

The research reveals that "the sense of having a body, of being in a bodily self," is actually constructed from multiple sensory streams, said Matthew Botvinick, an assistant professor of neuroscience at Princeton University, an expert on body and mind who was not involved in the experiments.

Usually these sensory streams, which include vision, touch, balance and the sense of where one's body is positioned in space, work together seamlessly, Prof. Botvinick said. But when the information coming from the sensory sources does not match up, when they are thrown out of synchrony, the sense of being embodied as a whole comes apart.

The brain, which abhors ambiguity, then forces a decision that can, as the new experiments show, involve the sense of being in a different body.

The research provides a physical explanation for phenomena usually ascribed to other-worldly influences, said Peter Brugger, a neurologist at University Hospital in Zurich, Switzerland. After severe and sudden injuries, people often report the sensation of floating over their body, looking down, hearing what is said, and then, just as suddenly, find themselves back inside their body. Out-of-body experiences have also been reported to occur during sleep paralysis, the exertion of extreme sports and intense meditation practices.

The new research is a first step in figuring out exactly how the brain creates this sensation, he said.

The out-of-body experiments were conducted by two research groups using slightly different methods intended to expand the so-called rubber hand illusion.

In that illusion, people hide one hand in their lap and look at a rubber hand set on a table in front of them. As a researcher strokes the real hand and the rubber hand simultaneously with a stick, people have the vivid sense that the rubber hand is their own.

When the rubber hand is whacked with a hammer, people wince and sometimes cry out.

The illusion shows that body parts can be separated from the whole body by manipulating a mismatch between touch and vision. That is, when a person's brain sees the fake hand being stroked and feels the same sensation, the sense of being touched is misattributed to the fake.

The new experiments were designed to create a whole body illusion with similar manipulations.

In Switzerland, Dr. Olaf Blanke, a neuroscientist at the École Polytechnique Fédérale in Lausanne, Switzerland, asked people to don virtual reality goggles while standing in an empty room. A camera projected an image of each person taken from the back and displayed 6 feet away. The subjects thus saw an illusory image of themselves standing in the distance.

Then Dr. Blanke stroked each person's back for one minute with a stick while simultaneously projecting the image of the stick onto the illusory image of the person's body.

When the strokes were synchronous, people reported the sensation of being momentarily within the illusory body. When the strokes were not synchronous, the illusion did not occur.

In another variation, Dr. Blanke projected a "rubber body" — a cheap mannequin bought on eBay and dressed in the same clothes as the subject — into the virtual reality goggles. With synchronous strokes of the stick, people's sense of self drifted into the mannequin.

A separate set of experiments was carried out by Dr. Henrik Ehrsson, an assistant professor of neuroscience at the Karolinska Institute in Stockholm, Sweden.

Last year, when Dr. Ehrsson was, as he says, "a bored medical student at University College London", he wondered, he said, "what would happen if you 'took' your eyes and moved them to a different part of a room? Would you see yourself where your eyes were placed? Or from where your body was placed?"

To find out, Dr. Ehrsson asked people to sit on a chair and wear goggles connected to two video cameras placed 6 feet behind them. The left camera projected to the left eye. The right camera projected to the right eye. As a result, people saw their own backs from the perspective of a virtual person sitting behind them.

Using two sticks, Dr. Ehrsson stroked each person's chest for two minutes with one stick while moving a second stick just under the camera lenses — as if it were touching the virtual body.

Again, when the stroking was synchronous people reported the sense of being outside their own bodies — in this case looking at themselves from a distance where their "eyes" were located.

Then Dr. Ehrsson grabbed a hammer. While people were experiencing the illusion, he pretended to smash the virtual body by waving the hammer just below the cameras. Immediately, the subjects registered a threat response as measured by sensors on their skin. They sweated and their pulses raced.

They also reacted emotionally, as if they were watching themselves get hurt, Dr. Ehrsson said.

People who participated in the experiments said that they felt a sense of drifting out of their bodies but not a strong sense of floating or rotating, as is common in full-blown out-of-body experiences, the researchers said.

The next set of experiments will involve decoupling not just touch and vision but other aspects of sensory embodiment, including the felt sense of the body position in space and balance, they said.


Friday, August 24, 2007

Re: Homework 1 (question1)

Yes.

Accessibility is synonymous with observability
(both are synonymous with "senseability" -- perfect sensing <-> full accessibility <-> full observability).

rao


On 8/24/07, Kyle Luce <kyle.luce@asu.edu> wrote:
Hi,

I had a small question about P1 on HW1.  I noticed the term "accessibility" and I was not sure what it meant in terms of the environment.  Does it just mean "fully observable"?  This term is repeated a few times in other problems on this HW, so I thought I'd better clarify to make sure.

Thanks in advance!
Kyle Luce

Wednesday, August 22, 2007

LISP refresher on friday (CSE 471)

Hi all,
 
I am your TA for cse 471 (Intro to AI) course.
 
As announced in today's lecture, I will be holding a "LISP refresher" session this Friday (August 24th).
 
Timings: 10:30 to 11:30 AM
Venue:  conference room no. 576 (BYENG building - 5th floor)
 
For those who can't make it, you can also meet me during my TA office hours tomorrow (3-4 PM, CSE Open Lab) and we can discuss your questions/doubts to help you get started with LISP.
 
Thanks,
Aravind

Monday, August 20, 2007

[cse471/598] Reading for next class: Chapter 2 in R&N

The bulk of the next class will be devoted to a discussion of Chapter 2 of the textbook.
Please read it before coming to class.
rao

[cse471/598] More announcements

A couple more announcements:
1. The class lecture notes and audio are posted to the class page (click on the lecture notes link in the right pane).
2. All through the semester you can send "anonymous" feedback to me (the instructor).
(In the interests of full disclosure, I should point out that the mail header captures the IP address of the machine you are sending email from. So you might want to send it from a generic machine rather than something like toms-laptop.asu.edu.)
Rao

Class Mailing List and Blog are set up..



If you are receiving this mail, you are included on the class mailing list, and must also
have received an invitation to join the class blog.
The class blog is at
Anyone can read it, but only registered students can post to it (you would have gotten an invitation which
gives you permission to post).
The mails sent to the class mailing list are automatically sent to your email, archived at
as well as posted to the blog. On the blog version, you can add your comments.
That is all for now.
rao