Well, today being a Saturday, I have taken it upon myself to not do much work and relax. In hindsight I think I relaxed far too much. I've done nothing today. Well, nothing but think and dream. The concept of the personal intelligent agent just seems to encapsulate everything.
These days everything seems to have some link to Twitter. I constantly get bombarded with updates from people's personal jogging software [link]. I even get updates on how well people do in games. OK, well, a certain game: Canabalt [link]. The updates only really come from the iPhone and iPod Touch versions of the game. It is highly annoying, but it is still interesting that I can gather information on how well I play that game. Saying all this, Twitter isn't alone in the updates; Facebook gets a number of them as well, from PlayStation Network updates on games, purchases and trophies gained. These are only a handful of the statistics and data I encounter using such websites, so with a bigger push you can really tell how much data can be gathered on one person and what they really get up to every day.
With all this information there is one example I keep using to explain the concept of the agent and the web service. With fridges having the technology to know what's in them from food purchase history and whatnot, we would be able to track what people eat. My music listening habits are tracked by Last.fm. My personally generated updates are gathered. My video gaming habits are available. Last but not least, my exercise habits (i.e. walking) are also stored. With this information a usual pattern of what I do on a daily basis is generated. This would be the normal John Geoffrey. Then let's say that my Twitter updates start using far darker and more negative words; this could mean I am in a bad mood. If my listening habits shift towards darker music, that could mean the same. If, along with this, my gaming just stopped, food just wasn't being eaten and my walking was next to nothing, the web agent would be able to gather this information and use it within a web service such as WebMD. This might be able to come to the conclusion that I may be heading into a depression. It would then send me an email or text on what to do to beat the depression (i.e. exercise more, eat healthily).
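To make the idea above a little more concrete, here is a toy sketch of how an agent might combine those signals into a rough "mood" score. Everything here is invented for illustration: the word list, the weights and the thresholds are all my own assumptions, not anything a real service like Twitter, Last.fm or WebMD provides.

```python
# Toy mood-detection sketch. All signal weights and thresholds are
# made-up assumptions for illustration; a real agent would pull live
# data from services like Twitter, Last.fm and a pedometer feed.

NEGATIVE_WORDS = {"dark", "tired", "alone", "sad", "nothing"}

def negative_word_ratio(tweets):
    """Fraction of words across recent tweets that look negative."""
    words = [w.lower().strip(".,!?") for t in tweets for w in t.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

def mood_score(tweets, hours_gamed, meals_eaten, steps_walked):
    """Combine signals into a rough 0..1 risk score (higher = worse)."""
    score = 0.0
    score += min(negative_word_ratio(tweets) * 4, 1.0) * 0.4  # dark language
    score += (1.0 if hours_gamed == 0 else 0.0) * 0.2         # gaming stopped
    score += (1.0 if meals_eaten < 2 else 0.0) * 0.2          # barely eating
    score += (1.0 if steps_walked < 1000 else 0.0) * 0.2      # barely walking
    return score

def advice(score, threshold=0.6):
    if score >= threshold:
        return "You may be heading into a low period: exercise more, eat healthily."
    return "All looks normal."
```

So a day of dark tweets, no gaming, one meal and almost no walking pushes the score past the threshold, while a normal day scores zero. It is crude, but it shows the shape of the idea: many weak signals, one combined conclusion.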
This example really is from an ideal world where everything makes sense, but you can see how, if everything is tracked and stored as data, the information could be passed on and used in a number of ways to help oneself.
Hope this helped explain it a bit more. For now I am off to cook some dinner and then get some reading and writing done on this whole project. Until next time.
I meant to start this blog for my university final year project about two weeks ago but I got caught up in webcams not working and actually trying to find the definition of what my project entails. This leaves me in a nice little place to write a wee blog on what I’ve been getting up to in the last fortnight.
To begin with I wanted to base my project around statistical usage within games and websites to help automatically generate a more intuitive experience for the end user. This kind of thinking stemmed from some coursework I did last semester, so I already had a little background research done on it. The main areas I was looking at were heat maps, server logs and questionnaires. This was fine and all, but through further reading I was having a hard time finding out what technologies are currently being used within the games industry for analysis purposes. The main culprit that came up was MCP, which is built into Unreal Engine 3 [link]. Beyond some built-in tools, I found that most of this work seems to be outsourced to companies such as e4e [link]. With so much at stake in the business world in relation to games, all further information regarding such analysis is kept well hidden. It was at this point that I started to think about dropping the idea of having the analysis engine related to games.
The World Wide Web. Who wouldn't want to analyse how that is being used, but where to begin? The internet as we know it is based around the idea of push and pull. These seem like the perfect way to work, but they all rely on the same archaic system: SERVERS! These "servers" are everywhere. Anybody can run them. Anybody can use them. Great for freedom of speech but not so great for analysis. OK, I do feel like I am complaining about how internet technologies work, but I don't mean to. It all just feels really old to me. So the problem here is that there are two points at which to collect information: client and server. These two collection sets do relate to each other, but correlating them proves close to impossible. The logistics of having the server talk to the client about what the client did pose problems: when to send the information, how much information to send, and who should do the processing on it? So with this I started to focus much more on what a client wants from a website and to aim my analysis at the server side.
Server side data collection comes in the form of web logs. These keep track of which files were accessed and when. That is a very basic web log, though, and further information can be gathered. Session IDs are used to track individuals, which leads on to knowing the path a certain user takes through the website. These IDs are useless, though, if the user leaves the computer for a while and then returns to what they believe to be the same session. It all really brings up the problem of the client again. When a webpage is requested, the media content contained in that page will be passed along as well. What if the user had no need for the images and such? The server still believes that they do and so continues to send them.
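To show what this basic server-side analysis looks like in practice, here is a small sketch that parses standard Common Log Format lines and groups the requested paths per visitor. I am using the client IP as a crude stand-in for a real session ID cookie, and the sample log lines are invented.

```python
# Sketch of basic server-side log analysis: parse Common Log Format
# lines and recover the path each visitor took through the site.
# The IP address stands in for a proper session ID here.

import re
from collections import defaultdict

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d+) \S+'
)

def paths_by_visitor(log_lines):
    """Return the ordered list of paths each visitor requested."""
    sessions = defaultdict(list)
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if m:
            sessions[m.group("ip")].append(m.group("path"))
    return dict(sessions)

sample = [
    '1.2.3.4 - - [10/Oct/2009:13:55:36 +0000] "GET /index.html HTTP/1.0" 200 2326',
    '1.2.3.4 - - [10/Oct/2009:13:56:01 +0000] "GET /about.html HTTP/1.0" 200 1104',
    '5.6.7.8 - - [10/Oct/2009:13:57:12 +0000] "GET /index.html HTTP/1.0" 200 2326',
]
print(paths_by_visitor(sample))
# the first visitor's path is /index.html then /about.html
```

Even this tiny amount of grouping already gives the "path through the site" information mentioned above; the hard part, as I said, is that the log can never tell you whether the visitor actually wanted everything they were sent.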
Client side data collection can be very powerful in determining what it is that a user actually does with a website. It can track the pages viewed, how long they were viewed for, and whether the content on a certain site was sufficient or the user went elsewhere to find further information. This kind of data would be hugely useful for a site owner trying to find out why a user didn't find their site entirely useful, but they can't get at this information. Well, not right now with standard web browsers.
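The "how long they viewed it for" part is easy to compute if you do have client-side events. Here is a sketch under the assumption that the browser could hand over a timestamped list of page navigations; the event format is my own invention, since no standard browser exposes this.

```python
# Sketch of client-side dwell-time analysis. Assumes a (hypothetical)
# browser feed of (timestamp_seconds, url) navigation events in order.

def dwell_times(events):
    """Return (url, seconds_viewed) for every page except the last,
    whose dwell time is unknown until the next navigation happens."""
    out = []
    for (t0, url), (t1, _) in zip(events, events[1:]):
        out.append((url, t1 - t0))
    return out

events = [
    (0, "http://example.com/"),
    (42, "http://example.com/articles"),
    (300, "http://other-site.com/search"),
]
print(dwell_times(events))
# the home page was viewed for 42 s, the articles page for 258 s
# before the user left for a different site entirely
```

Note that the last event in the list shows exactly the thing a site owner would love to know and currently can't: the user leaving for another site to find more information.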
With this line of thinking going I started to voyage into the world of intelligent agents and web services. I see these as two parts of the same overall solution. An intelligent agent (in my opinion) is to be used on the personal level. It will be your buddy. It will travel the web with you no matter what device you use and help give you ideas of what else to do with your internet time. Though this agent is a personal one, it cannot be local only, as it needs to track you wherever you go. I feel that web services, and by extension expert systems, are the other part of the solution. The agent can use web services to find out what new music is available on an online store in your local area. If there is any, the agent will check with its knowledge base to see if you already have this music, whether you like similar music, and even, to a certain extent, whether you are likely to listen to something similar based on current listening trends.
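The music example above can be sketched as a tiny agent-side filter: the web service reports new releases, and the personal agent checks them against its knowledge base of what I already own and which artists I currently play a lot. All of the data and the `min_plays` cut-off here are invented for illustration; a real agent would pull them from store and listening-history services.

```python
# Toy sketch of the agent/web-service split: a store's web service
# returns new releases, and the personal agent filters them using its
# own knowledge base. All data below is made up for illustration.

new_releases = [  # what a store's web service might return
    {"artist": "Artist A", "album": "Album 1"},
    {"artist": "Artist B", "album": "Album 2"},
    {"artist": "Artist C", "album": "Album 3"},
]

owned = {("Artist A", "Album 1")}                   # agent's knowledge base
listening_trends = {"Artist B": 57, "Artist C": 2}  # recent play counts

def recommend(releases, owned, trends, min_plays=10):
    """Suggest releases I don't own, by artists I currently play a lot."""
    return [
        r for r in releases
        if (r["artist"], r["album"]) not in owned
        and trends.get(r["artist"], 0) >= min_plays
    ]

print(recommend(new_releases, owned, listening_trends))
# only Artist B's new album survives: Album 1 is already owned, and
# Artist C barely features in the current listening trends
```

The point of the split is that the web service knows about the world (what is on sale) while the agent knows about you (what you own and play), and the recommendation only exists where the two meet.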
That is about where my thinking is at the moment. I am not actually too sure if I have somewhat explained what it really is that I am planning on doing, but within the next few days I shall be writing further blogs on certain aspects of this project to help not only you but also myself.