By James Crowley
Today I attended a talk in the LTI Colloquium Series, sponsored by the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University. Professor Crowley presented the recent projects he has been managing at the INRIA Grenoble Rhône-Alpes Research Center in Montbonnot. Below is a summary of the talk together with my views on it:
At their research center, Professor Crowley and his colleagues do research and build commercial products and applications based on smart objects, such as a smart home. To this end, they keep a fully equipped home in their research center, which they use for their experiments. Professor Crowley presented sample scenarios from their smart home projects, such as home logistics, episodic memory for the kitchen, and a smart thermostat. In the kitchen scenario, for example, they log information such as who put the yogurt into the refrigerator and who took it out. They also work in other application domains, such as polite and social interaction models for robots, and recording events in a meeting room (who is speaking, who is listening, who is looking at the computer, etc.). All of these applications are based on smart objects: ordinary everyday objects augmented with computation and communication capabilities, essentially by attaching sensors and actuators. As the underlying model of these applications, they use the “Situation Model”, which was proposed in the doctoral dissertation of one of Professor Crowley’s advisees, a PhD student at Carnegie Mellon, about ten years ago.
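The episodic-memory idea for the kitchen can be sketched as a simple event log that records who did what to which object, and can later be queried for the most recent matching event. This is only my own minimal illustration of the concept; the class and field names are hypothetical, not the actual system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class KitchenEvent:
    person: str   # identity inferred by the smart home, e.g. from sensors
    action: str   # "put" or "take"
    item: str
    time: datetime

class EpisodicMemory:
    """A chronological log of kitchen events, queryable by item and action."""
    def __init__(self):
        self.events = []

    def record(self, person, action, item):
        self.events.append(KitchenEvent(person, action, item, datetime.now()))

    def last_event(self, item, action):
        """Return the most recent event matching item and action, or None."""
        for ev in reversed(self.events):
            if ev.item == item and ev.action == action:
                return ev
        return None

memory = EpisodicMemory()
memory.record("Alice", "put", "yogurt")
memory.record("Jane", "take", "yogurt")
print(memory.last_event("yogurt", "put").person)   # Alice
print(memory.last_event("yogurt", "take").person)  # Jane
```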
As I noted from his introduction to the model, the situation model is widely used in Cognitive Psychology to describe human abilities. They define a situation as a set of relations between entities, i.e., a state. Analogically, this model therefore resembles state transition models. Relations between states are truth functions, which can be boolean or probabilistic. Behaviors attached to situations are the events and actions. As a result, a situation graph is a network of situations with transitions between them. Because the talk was somewhat loosely organized, I could not clearly grasp the overall architecture and the roles of the system components; still, I will try to explain two more issues he covered, somewhat disconnectedly (I do not mean the talk was unimpressive, but the connections between the topics were lacking). To fulfill the commitment I mentioned in the episodic kitchen example, the smart home application needs to define roles and assign people to those roles. The refrigerator can then infer that it was Alice who put the yogurt in and Jane who took it out. Role assignment is based on people’s behaviors, such as their speech.
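As I understood it, a situation is a set of relations, and transitions fire when a truth function becomes true. The following is my own rough sketch of that structure, assuming boolean truth functions; the names and the event representation are mine, not the actual formalism from the dissertation.

```python
class Situation:
    """A situation: a named set of relations (predicates over entities)."""
    def __init__(self, name, relations):
        self.name = name
        self.relations = relations  # e.g. {("at", "person", "stove")}

class SituationGraph:
    """A network of situations; a transition fires when its truth
    function holds for the observed event."""
    def __init__(self, start):
        self.current = start
        self.transitions = []  # (source, truth_function, destination)

    def add_transition(self, src, truth_fn, dst):
        self.transitions.append((src, truth_fn, dst))

    def observe(self, event):
        """Advance to the first destination whose truth function accepts
        the event; stay put if none matches."""
        for src, truth_fn, dst in self.transitions:
            if src is self.current and truth_fn(event):
                self.current = dst
                return dst
        return self.current

idle = Situation("idle", set())
cooking = Situation("cooking", {("at", "person", "stove")})
graph = SituationGraph(idle)
graph.add_transition(idle, lambda e: e == "person_at_stove", cooking)
graph.observe("person_at_stove")
print(graph.current.name)  # cooking
```

Probabilistic truth functions would return a value in [0, 1] instead of a boolean, with the transition taken above some threshold.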
Another concern is the underlying technology for inferring who is doing what, when, and where. For this purpose, they use machine learning in both offline and online modes. Offline algorithms are used to learn prototype scripts for human behaviors. The rationale is that most people share common behaviors that can be captured as prototype scripts; for example, people lean over a table in a similar manner. Online learning is used to learn the service behavior of the applications by getting feedback from the user. In other words, the application pops up a notification on the user’s phone as a reminder, and the user may block it, implying that it is not a proper time and the user is being interrupted. The block is thus a kind of punishment, and the process amounts to reinforcement learning. He stated roughly that they use several forms of decision trees, which work well for their purposes.
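The punishment-based feedback loop he described could be realized in many ways; a minimal sketch, assuming a simple value estimate per hour of day (my own simplification, not their actual decision-tree method), would treat a blocked notification as reward −1 and an accepted one as +1:

```python
import random

class NotificationPolicy:
    """Learn when to interrupt the user: keep a running value estimate per
    hour-of-day slot; blocking a notification is punishment (-1),
    accepting it is reward (+1)."""
    def __init__(self, alpha=0.2, epsilon=0.1):
        self.values = {h: 0.0 for h in range(24)}
        self.alpha = alpha      # learning rate for the running average
        self.epsilon = epsilon  # exploration rate

    def should_notify(self, hour):
        if random.random() < self.epsilon:
            return random.choice([True, False])  # occasionally explore
        return self.values[hour] >= 0.0          # otherwise exploit

    def feedback(self, hour, blocked):
        reward = -1.0 if blocked else 1.0
        # Incremental update toward the latest reward signal.
        self.values[hour] += self.alpha * (reward - self.values[hour])

policy = NotificationPolicy()
for _ in range(20):
    policy.feedback(9, blocked=False)   # user accepts morning reminders
    policy.feedback(22, blocked=True)   # user blocks late-night ones
```

After this training, the policy favors notifying at 9:00 and suppressing notifications at 22:00.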
I should admit that I was impressed by the talk Professor Crowley gave. I appreciate that they produce many different sophisticated commercial products using models (the situation model and machine learning algorithms) that most Computer and Information Science researchers are familiar with and know to be easy to comprehend and implement. The most remarkable aspect of their applications, in contrast to Google’s and Apple’s products, is that they do not perform computations in, or send data to, the cloud. All the operations are done on mobile devices. Hence, their applications are more resistant to privacy and security breaches. As a security researcher, I really appreciate this, as well as their effort to carry out all computations in a less energy-demanding way on mobile devices, which have strict power limitations. The sole question I have after the talk is how they assign probabilities to the transitions between situations. These kinds of models are easy to implement on paper, but in practice it is hard to associate a probability with the occurrence of an event.
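One standard answer to my own question, though I do not know whether it is what they actually do, is to estimate transition probabilities empirically from logged situation pairs, with add-alpha (Laplace) smoothing so that unseen transitions keep a small nonzero probability:

```python
from collections import Counter, defaultdict

def estimate_transition_probs(observed, alpha=1.0):
    """Estimate P(dst | src) from a log of observed (src, dst) situation
    pairs, using add-alpha smoothing over all observed states."""
    counts = defaultdict(Counter)
    for src, dst in observed:
        counts[src][dst] += 1
    states = {s for pair in observed for s in pair}
    probs = {}
    for src in states:
        total = sum(counts[src].values()) + alpha * len(states)
        probs[src] = {dst: (counts[src][dst] + alpha) / total
                      for dst in states}
    return probs

# Hypothetical log of situation transitions observed in the smart home.
log = [("idle", "cooking"), ("cooking", "eating"), ("eating", "idle"),
       ("idle", "cooking"), ("cooking", "cleaning")]
p = estimate_transition_probs(log)
print(p["idle"]["cooking"])  # 0.5
```

The harder problem the speaker’s framing hints at, of course, is not the arithmetic but deciding what counts as an observed event in the first place.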