Context-aware systems

These ongoing research projects are from the Augmented Perception Lab, so only overviews of the concepts are presented below.

 

Optimint (opportunistic-micro-task-interleaving)

Working with Hyunsung Cho and David Lindlbauer

There are numerous fragmented moments in our day where we have spare time between our main tasks: a 30-minute commute downtown, or 20 minutes of waiting during your lunch break for the next Zoom call to start. Aside from purposeful downtime, I wanted to create a system that puts this wasted scrap time to potentially productive use.

What if there were a system that is aware of your current situational limitations and suggests appropriate micro-tasks you could complete within that time window? The goal is optimizing micro-tasks during opportune moments, or OPTIMINT.

1. Set Time Frame

2. Task Suggestion

3. UI Generation

Once the system is called, it asks the user for the duration of the available time window. It then takes video input from the device and calls the Gemini API to analyze the user's context. With the context parameters set (available input and output modalities for performing a task, duration, available devices, ongoing/completed tasks), it uses integer linear programming (ILP) to suggest the three most suitable tasks the user can perform in that situation. Once the user selects a task, the system generates an appropriate UI through which the task can be carried out.
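As a rough illustration of the suggestion step, the sketch below filters candidate micro-tasks by the context parameters described above and ranks the survivors. This is a simplified stand-in for the actual ILP formulation; the task names, fields, and scoring rule are all hypothetical.

```python
# Hypothetical sketch of Optimint's task-suggestion step (not the real ILP).
# Tasks are filtered by duration and available modalities, then ranked.

from dataclasses import dataclass, field

@dataclass
class MicroTask:
    name: str
    minutes: int          # estimated time to complete
    modalities: set       # input/output modalities the task requires
    priority: int         # higher = more urgent

def suggest_tasks(tasks, available_minutes, available_modalities, k=3):
    """Return the k highest-priority tasks that fit the time window
    and the modalities the current context allows."""
    feasible = [
        t for t in tasks
        if t.minutes <= available_minutes
        and t.modalities <= available_modalities
    ]
    # Rank by priority, breaking ties in favor of shorter tasks.
    feasible.sort(key=lambda t: (-t.priority, t.minutes))
    return feasible[:k]

tasks = [
    MicroTask("Reply to a short email", 5, {"touch", "screen"}, 3),
    MicroTask("Review slide deck", 25, {"touch", "screen"}, 2),
    MicroTask("Listen to lecture recap", 15, {"audio"}, 1),
]

# A 20-minute commute where touch, screen, and audio are all usable.
picks = suggest_tasks(tasks, 20, {"touch", "screen", "audio"})
print([t.name for t in picks])
# → ['Reply to a short email', 'Listen to lecture recap']
```

The 25-minute slide-deck review is dropped because it exceeds the window; the remaining tasks are ordered by priority.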

 

FuncXR

Working with Hyunsung Cho, Ruonan Sun and David Lindlbauer

We designed a novel functionality-centric representation of XR UIs. There are three classes of FuncXR applications: generative cross-app widgets, adaptive level-of-detail control, and contextual recommendations.

I led the translation of LLM outputs into the Unity UI display using MRTK3, as well as the first application shown below.

For the first application, the user states what they want to do, such as “I want to turn off the light and play a song from the playlist,” and FuncXR generates a custom widget that handles just those specific functionalities by combining SmartHome and Music controls into a single display.
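The cross-app widget idea can be sketched as follows: individual functionality units, decoupled from the full apps they came from, are recombined into one widget spec that a UI layer could then render. The app names, registry keys, and fields here are illustrative, not the actual FuncXR schema.

```python
# Hypothetical sketch of generative cross-app widgets: functionality units
# from different apps are combined into one custom widget spec.
# (Names and fields are illustrative, not the real FuncXR representation.)

# Functionality-centric registry: each entry is a single capability,
# decoupled from the full app it belongs to.
FUNCTIONALITY_REGISTRY = {
    "lights.off":  {"app": "SmartHome", "control": "toggle", "label": "Lights off"},
    "lights.dim":  {"app": "SmartHome", "control": "slider", "label": "Dim lights"},
    "music.play":  {"app": "Music",     "control": "button", "label": "Play song"},
    "music.queue": {"app": "Music",     "control": "list",   "label": "Playlist"},
}

def build_widget(requested_functions):
    """Combine the requested functionality units into one widget spec
    that a UI layer (e.g. Unity with MRTK3) could render as one panel."""
    controls = [FUNCTIONALITY_REGISTRY[f] | {"id": f} for f in requested_functions]
    return {
        "title": " + ".join(sorted({c["app"] for c in controls})),
        "controls": controls,
    }

# "Turn off the light and play a song" → two units, two apps, one widget.
widget = build_widget(["lights.off", "music.play"])
print(widget["title"])          # → Music + SmartHome
print(len(widget["controls"]))  # → 2
```

In this scheme the user never opens either full app; the widget exposes exactly the two requested functions and nothing else.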

For the third application, contextual recommendation, we compared three different representations (Full App, Functionality-based, Tree-based) in a user study with 12 participants, which I partially conducted.
