Reinventing Fire

A Life in a Day, in 2024

July 15, 2014

Filed under: Annotation,Future,W3C,Web,Work — Magister @ 10:03 pm

I woke up startled; my glasses were ringing. I was late for a telcon… again. I’d stayed up working too late last night.

I slipped on my glasses and answered the call. Several faces popped up a few feet in front of my eyes. Okay, so it was a videocon… sigh. I muted and blanked my glasses, switched them to speakerphone, and placed them on the table, the lenses vibrating as speakers. I pulled on some clothes and rubbed my face awake, trotting into the bathroom with my glasses in my left hand. As I splashed some water on my face, I heard my name called from my glasses on the counter; “Doug, did you get in contact with them?”

“Specs, delay,” I told my glasses, and my phone agent told the other participants, politely, “Please wait for 10 seconds for a response.”

Drying my face quickly on a towel, I put my glasses back on, looked into the mirror, unblanked the camera and unmuted the mic, and replied, “Hey, folks, yes, sorry about that. I did talk to them, and they are pretty receptive to the idea. They have their own requirements, of course, but I’m confident that we can fold those into our own.” I noticed my own face in the display, broadcast from my camera’s view of my reflection in the mirror, and hastily straightened out my sleepy hair.

A few minutes later, when the topic had changed, I opened a link that someone dropped into the call, and started reading the document in the glasses’ display. With the limited space available, I scanned it in RSVP (rapid serial visual presentation) mode, but quickly found it too distracting from the simultaneous conversation, requiring too much concentration. So I muted and blanked again, and walked down the hall to my office. Ensconced in front of my big screen, I re-routed the call to use the screen’s videocamera and display.

On the screen, it was easier to scan the document at my leisure. I could easily shift my focus back to the conversation when needed, without losing my place in the document. I casually highlighted a few passages to follow up on later, and made a few notes. I did the same with another document linked from the telcon, and my browser told me that this page was linked to from a document I’d annotated several months before. I marked it to read and correlate my notes in depth after the call. One thing that stood out immediately was that both documents mentioned a particular book; I was pretty sure I’d bought a physical copy a couple of years before, and I stepped over to my bookshelves. I set my glasses’ camera to auto-scan, looking for the title via OCR, and on the third set of shelves, my glasses blinked on a particular book; sure enough, I had a copy. I guess I could have simply ordered a digital version, but I already had the physical edition handy, and sometimes I preferred having a real book in my hands.

My stomach started grumbling before the call ended. I decided to go out to lunch. Throwing the book and one of my tablets into my bag, I asked my glasses to pick a restaurant for me. It scanned the list of my favorites, and also looked for new restaurants with vegetarian food, seeking a nice balance between distance, ratings, and number of current patrons. “I’ve found a new food truck, with Indian-Mexican fusion. It’s rated 4.5, and there are several vegetarian options. Dave Cowles is eating there now. It’s a 7-minute drive. Is that okay, or should I keep looking?”

“Nope, sounds great. Call Dave, would you?” A map popped up, giving me an overview of the location, then faded away until it was needed. A symbol also popped up, indicating that my call to Dave had connected, on a private peer session.

“Hey, Doug, what’s up?”

“I was thinking of going to that food truck…”, I glanced up and to the right, and my glasses interpreted my eye gesture as a request for more context information, displaying the name of the restaurant, “… Curry Favor. You’re there now, right? Any good?”

“I just got here myself. Want me to stick around?”

“Yeah, I’ll be there in about 10 minutes.” I headed out the door, and unhooked my car’s charger before I jumped in. My glasses showed the next direction, and the car infographics; the car had a full charge. “Music”, I said, as I drove off; my car interface picked a playlist for me, a mix of my favorites I hadn’t heard in a while, and some new streaming stuff my music service thought I would like. As I got out of range of my house’s wifi, my glasses switched seamlessly to the car’s wifi. It was an easy drive, with my glasses displaying the optimal route and anticipating shifting traffic patterns and lights, but I still thought how nice it would be to buy one of the self-piloted cars. My car knew my destination from my glasses, and it alerted me that a parking spot had just opened up very near the food truck, so I confirmed and it reserved the spot; I’d pay an extra 50¢ to hold the spot until I arrived, but it was well worth it. My glasses read the veggie menu options out loud on demand, and I chose the naan burrito with palak paneer and chick peas; my glasses placed my order in advance via text.

I pulled into my parking space, and my glasses blinked an icon telling me the sub-street sensor had registered my car’s presence. Great parking spot… I was right across the street from the food truck. I walked over to the benches where Dave sat. “Hey, Dave.”

We exchanged a few words, but my glasses told me my order was ready in a flash. I went to the window, and picked up my burrito; the account total came up in my view, and I picked a currency to pay it; I preferred to use my credit union’s digital currency, and was glad when the food truck’s agent accepted it. “Thanks, man,” I smiled at the cashier.

Dave and I hadn’t seen each other in a while, and we caught up over lunch. It turned out he was working on a cool new mapping project, and I drilled him for some details; it wasn’t my field, but it was interesting, and you never knew when a passing familiarity might come in handy. With his okay, my glasses recorded part of our conversation so I could make more detailed notes, and his glasses sent me some links to look at later. We finished our food quickly –it was tasty, so I left a quick positive review– and walked to a nearby coffee shop to continue the conversation. While we were talking, Dave recommended an app that I bought, and I also bought a song from the coffee shop that caught my ear from their in-house audio stream; Dave and the coffee shop each got a percentage of the sale. I learned that the coffee shop got an even bigger share of the song, because the musician had played at their shop and they’d negotiated a private contract, in exchange for promotion of her tour, which popped up in my display; that was okay, I liked supporting local businesses, and I filed away the tour dates in my calendar in case it was convenient for me to go to the show.

Dave went back to work, and I settled into the coffee shop to do some reading. First I read some of the book I’d brought, making sure to quickly glasses-scan the barcode first so I could keep a log; I found several good pieces of information, which I highlighted and commented on; my glasses tracked my gaze to OCR the text for storage and anchoring, and I subvocalized the notes. I then followed up on the links from earlier; my agent had earned its rate, having found several important correlations between the documents and my notes, as well as highly-reputed annotations from others on various annotation repos, and I thought more about next steps. I followed a few quick links to solidify my intuition, but on one link, I got stopped abruptly at an ad-wall; for whatever reason, this site insisted I watch a 15-second video rather than just opting-in to a deci-cent micropayment, as I usually did when browsing. I tolerated the video –unfortunately, if I took my glasses off while it played, the ad would know– only to find that the whole site was ad-based… intolerable, so I did some keyword searching to find an alternate site for the information.

Light reading and browsing was fine in a public place, but to get any real work done, I needed privacy. I strolled back to my car –my glasses reminding me where I’d parked– and I returned home. Back in my office, I put on some light music, and started coding. I started with a classic HTML-CSS-SVG-JS doc-app component framework on my local box, because I was old-school, and went mobile from there, adding annotated categories to words and phrases for meaning-extraction, customizing the triple-referenced structured API, dragging in a knowledge-base and vocabulary for the speech interface and translation engine, and establishing variable-usage contract terms with service providers (trying to optimize for low-cost usage when possible, and tweaking so the app would automatically switch service providers before it hit the next payment threshold… I’m cheap, and most of my users are too). I didn’t worry much about tweaking the good-enough library-default UI, since most users would barely or rarely see any layout, but rather would interact with the app through voice commands and questions, and see only microviews; I paid more attention to making sure that the agents would be able to correctly index and correlate the features and facts. Just as I was careful to separate style from content, I was careful to separate semantics from content. At some point, I reflected, AIs would get powerful enough so that information workers wouldn’t have such an easy time making a living; I wondered if we’d even need markup or APIs or standards at all, or if the whole infrastructure would be dynamic and ad-hoc. Maybe the work I was doing was making me obsolete. “‘Tis a consummation devoutly to be wished,” I thought to myself wryly.

I put the finishing touches on the app prototype, wrote some final tests, and ran through a manual scenario walk-through to pass the time while the test framework really put the app through its paces, spawning a few hundred thousand virtual unique concurrent users. Other than a few glitches to be polished up, it seemed to work well. I was pretty proud of this work; the app gave me real-time civic feedback, including drill-down visualization, on public policy statements, trawling news sites, social networks, and annotation servers for sentiment and fact-checking; it balanced opinion with cost-benefit risk-scenarios weighted by credibility and likelihood, and managed it all with voting records of representatives. It also tracked influence, whether by lobbying, donations, or inferred connections, and correlated company ownership chains and investments, to give a picture of who was pushing whose buttons, and it would work equally well for boycotting products based on company profiles as for holding politicians accountable. As part of the ClearGov Foundation’s online voting system, it stood a chance of reforming government, though it was getting more adoption in South America and Africa than it was in the US so far. Patience, patience…

Megan came home from work with dinner from a locavore kitchen; the front door camera alerted me to her approach, and I saw she had her hands full. “Open front door,” I told the house as I rose to help her. We ate in front of the wallscreen, watching some static, non-interactive comedy streams; we were both too tired to “play-along” with plots, character POV, or camera angles, and it wasn’t really our style anyway. I hadn’t gotten enough rest the night before, so I turned in early to read; the mattress turned off the bedside light when it sensed my heart-rate and breathing slow into sleep.

Note: This story of the Web and life in 2024 is clearly fictional; nobody would hire someone who’d worked in web standards to do real programming work.

Invisible Visualization

April 22, 2014

Filed under: Accessibility,Audio,SVG,Work — Magister @ 1:27 am

Last year, I put together a talk called “Invisible Visualization” on making accessible data visualizations. Several people have asked me about it, so I thought I’d write a post about it.

By “accessible”, I mean able to be consumed and understood by people with a variety of sensory needs, including people with visual limitations, cognitive impairments, mobility challenges, and other considerations. I provided a few simple ways to expose the data in SVG images, but mostly I described different constraints and suggested ways of thinking about the problems.

I didn’t want to lecture people about the need for making their content accessible; I wanted to excite them about the possibilities of doing so. It’s great that there are legal regulations addressing the needs of people with disabilities (like the “Section 508” laws here in the US), but that’s not going to empower and motivate developers and designers to want to meet these kinds of design constraints and solve these kinds of technical challenges. I sought to avoid the “threat and guilt” trap that I’ve seen too many accessibility talks fall into.

(more…)

Archived Link Thunderbird Extension

August 24, 2011

Filed under: Code,Email,Technical,W3C,Work — Magister @ 12:21 am

This week is our first Geek Week at W3C. The idea is to have a week where we improve our skills, collaborate with each other on prototypes and fun projects that help W3C, and come up for air from our everyday duties. I’m working on a few projects, some small and some larger.

One of my projects is to make a plugin for Thunderbird, my email client of choice, which exposes the Archived-At email message header field, normally hidden, as a hyperlink. This is useful for W3C work because we often discuss specific email messages during teleconferences (“telcons”), and we want to share links to (or otherwise find or browse to) the message in W3C’s email archives. It’s also handy when you are composing messages and want to drop in links referring to other emails. (I do way too much of both of these.)
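The header format itself is simple: per RFC 5064, the Archived-At field carries the message’s archive URI wrapped in angle brackets. As a minimal sketch (plain JavaScript, independent of any Thunderbird API — the function name is just mine), extracting the link target might look like this:

```javascript
// Extract the URL from an Archived-At header value, e.g.
//   Archived-At: <https://lists.w3.org/Archives/Public/www-svg/2011Aug/0001.html>
// Returns the bare URI, or null if the value isn't in the expected form.
function archivedAtUrl(headerValue) {
  const match = /<([^>]+)>/.exec(headerValue.trim());
  return match ? match[1] : null;
}
```

The rest of the extension is Thunderbird-specific plumbing: reading the normally hidden header and rendering the extracted URL as a clickable link in the message header pane.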

I’ve made extensions for Firefox before, but never for Thunderbird, so this was an interesting project for me.

(more…)

Retain Accessibility Immediately

June 30, 2011

Filed under: Accessibility,Browsers,Canvas,HTML5,Standards,SVG,Technical,W3C,Work — Magister @ 12:43 am

There has been a heated argument recently on the W3C Canvas API mailing list between accessibility advocates and browser vendors over a pretty tricky topic: should the Canvas API have graphical “objects” to make it more accessible, or should authors use SVG for that? I think it’s a false dichotomy, and I offer a proposal that suggests a way to improve the accessibility potential of the Canvas 2D API by defining how SVG and the Canvas 2D API can be used together.

This brings together some ideas I’ve had for a while, but with some new aspects. This is still a rough overview, but the technical details don’t seem too daunting to me.
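For context, here is the baseline HTML5 mechanism any such pairing would build on (this snippet is only an illustration, not the proposal itself): the subtree inside a <canvas> element is fallback content, exposed to assistive technology even though the visible pixels are drawn by script.

```html
<canvas id="chart" width="300" height="150">
  <!-- Fallback subtree: never painted when canvas is supported,
       but available to assistive technology and non-canvas browsers. -->
  <svg xmlns="http://www.w3.org/2000/svg" width="300" height="150"
       role="img" aria-label="Sales by quarter">
    <title>Sales by quarter</title>
    <!-- a structured, accessible description of what the script paints -->
  </svg>
</canvas>
```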

(more…)

Getting In Touch

January 29, 2011

Filed under: Browsers,Standards,Tech,Technical,TouchTablet,W3C,Work — Magister @ 4:15 pm

Last week, I published the first draft (and subsequent updates) of the Web Interface specification, which defines touch events. This is the first spec from the new W3C Web Events Working Group.

Peter-Paul Koch (PPK) gave it a positive initial review. Apparently, others thought it was news-worthy as well, because there were a few nice write-ups in various tech sites. Naturally, cnet’s Shank scooped it first (he has his ear to the ground), and it was fairly quickly picked up by macgasm, electronista, and Wired webmonkey.

I thought I’d go into a few of the high-level technical details and future plans for those who are interested.

(more…)

Translation Services at a Loss for Words

November 15, 2010

Filed under: Accessibility,Search Engines,SVG,Tech,Technical,Travel,Work — Magister @ 11:27 pm

Text in SVG is text. Visually, you can use webfonts like WOFF or SVG Fonts (where they are supported, like in Opera or the iPhone) to make it look cool, and you can style both the stroke and fill to make it all fancy, or apply filters to pop it out or make it glow or give it a dropshadow, but it’s not just a raster image like many text headers… it’s human- and machine-readable text, as nature intended.
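A quick illustration (the values are arbitrary): the string below remains selectable, searchable, machine-readable text, even with a stroked fill and a classic SVG 1.1 drop-shadow filter applied.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="100">
  <defs>
    <filter id="shadow">
      <feGaussianBlur in="SourceAlpha" stdDeviation="2" result="blur"/>
      <feOffset in="blur" dx="2" dy="2" result="offsetBlur"/>
      <feMerge>
        <feMergeNode in="offsetBlur"/>
        <feMergeNode in="SourceGraphic"/>
      </feMerge>
    </filter>
  </defs>
  <text x="20" y="60" font-size="40"
        fill="gold" stroke="maroon" stroke-width="1.5"
        filter="url(#shadow)">Still real text</text>
</svg>
```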

So, SVG is translatable, right?
(more…)

SVG Game Resources

October 13, 2010

Filed under: Games,SVG,SVG Basics,Technical,Work — Magister @ 12:36 pm

Mozilla is holding an Open Web Games competition. I expect that many of the games will use the Canvas API, since many programmers are more familiar with the imperative programming model, and there are some game libraries that have been developed for Canvas or adapted from existing drawing or gaming libraries.

But I’m calling for SVG developers and designers to step up to the plate, as well. SVG has a lot of features that make it easier, out of the box, to build interfaces, animations, and even games. There is a scene graph, and the DOM event model gives you free hit detection for pointer events, for example. And I’d love to see someone make an open-web game that’s both accessible and fun…
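For example (a toy fragment, not from any particular game): the shape below responds to clicks with no hit-testing code at all, because the browser dispatches the event to whichever element was actually hit.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <circle id="target" cx="100" cy="100" r="40" fill="tomato"/>
  <script type="application/ecmascript"><![CDATA[
    // No bounding-box math or ray casting needed; the DOM event
    // model already knows which element was under the pointer.
    document.getElementById("target")
      .addEventListener("click", function (evt) {
        evt.target.setAttribute("fill", "steelblue");
      });
  ]]></script>
</svg>
```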

To help developers along, I thought I’d share a few free, open-source SVG resources that could be useful in building games:
(more…)

Pointers on Background of SVG Borders on Mutiny

August 21, 2010

Filed under: CDF,Standards,SVG,SVG 2,Technical,W3C,Work — Magister @ 5:37 pm

With SVG being integrated more and more into HTML5, whether included via <object> and <img> elements or written inline in the same document, some natural questions about SVG and CSS are receiving more focus. This includes box model questions like background and border, and pointer events.
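For reference, the three embedding routes in question look like this (logo.svg is a hypothetical file name):

```html
<!-- As a plain image: no script, no interactivity inside the SVG -->
<img src="logo.svg" alt="Site logo">

<!-- As an embedded document: the SVG keeps its own DOM and scripts -->
<object type="image/svg+xml" data="logo.svg"></object>

<!-- Inline: the SVG elements live directly in the HTML DOM -->
<svg xmlns="http://www.w3.org/2000/svg" width="40" height="40">
  <circle cx="20" cy="20" r="18" fill="navy"/>
</svg>
```

Each route gives CSS and pointer events a different boundary to respect, which is exactly where the background, border, and hit-testing questions come from.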

I’m interested in comments from the community on what direction SVG should take.

(more…)
