Time Should Fade (Almost) Everything…

Update: There aren’t APIs for most of the services that I’d want to use here, so I’m putting this project on pause for now. I’ll probably hack something together for my own use, but trying to turn this into a service doesn’t seem possible given API usage guidelines from these services.


I posted the following to Twitter the other day:

So, if all goes according to plan, all of my Twitter history up to yesterday-ish will be deleted, and I will have set up some code (that I control) that will delete everything older than 7 days on an ongoing basis.

I used to believe that everything posted on the Internet should stay, forever. I’m not so sure that is true. Published for public consumption, forever? Beginning to doubt that for most normal humans…

In the near future (weeks hopefully), I’m going to start automatically hiding old photos, blog posts, everything except those that seem really worthwhile to keep up indefinitely. I’m working on the rules still, curious what people think are good rules.

I’m almost certain that corporate social media policies, especially for public facing employees, should strongly recommend services that do the same – either delete or tighten up permissions after a window of time on public posts (And anything on FB)…

So, in short – you’re only going to see ~7 days of old tweets on my Twitter account. This post is about how I’m setting that up.

The short term hack

Twitter makes this hard (though I think this is unintentional). Specifically, they make it hard to access anything more than the last 3200 tweets in your account via the API. So, getting your account down to just the last 7 days ends up requiring two bits of software:

  1. Find a way to delete tweets older than my most recent 3200.
  2. Set up a process that watches my Twitter feed regularly and deletes tweets older than 7 (or whatever) days.

Deleting all of my tweets

I decided I would delete all of my tweets to begin with. If Twitter offered a native “archive” or unpublish option, a la Instagram, I might not have deleted everything. But they don’t, so this was my only option to start with a clean slate.[footnote]I didn’t feel too bad about this, because I had an out. As part of this process, I had to download my official Twitter data archive, which has everything. On top of that, I use a bookmarking service called Pinboard that has a feature that copies all my tweets and makes them searchable, privately, just for me. (It does require the paid archive feature in order to get the full text of each tweet. Otherwise it only stores a truncated version of the text.)[/footnote]

I found a small script someone wrote on Github, forked it, and then modified it quite significantly. The script and instructions are on my Github account. You’ll need to be comfortable at the command line if you want to use it. It’s rough, and I offer no guarantees that it will run smoothly for you. Also, keep in mind – it will delete all of your tweets, and there is no undo. Keep your backup archive safe, and make sure this is what you want: delete everything.

To get around the 3200 tweet API issue mentioned above, the script uses the tweets.js file that comes in the data backup from Twitter, so the good thing is that you’re basically forced to download the backup to use the utility. That file contains the IDs for all of your tweets (among other things), which is all we need to issue the delete command for that tweet.
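To give a sense of how that works, here’s a minimal sketch of the ID extraction. The field names match the archive format I downloaded; newer archive versions may differ.

```javascript
// Pull tweet IDs out of the tweets.js file from a Twitter data archive.
// The file is a JavaScript assignment ("window.YTD.tweets.part0 = [...]"),
// not pure JSON, so strip everything before the first bracket and parse.
function extractTweetIds(source) {
  const json = source.slice(source.indexOf('['));
  // Each entry wraps the tweet object; id_str is the ID the delete
  // endpoint needs (tweet IDs overflow JS numbers, so use the string).
  return JSON.parse(json).map((entry) => (entry.tweet || entry).id_str);
}
```

With the IDs in hand, the script just loops over them and issues a delete call for each one.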

The ongoing culling of my older tweets

Again, I started with someone else’s code. I found a nice little project written in Go that leveraged AWS Lambda to run the little bot. I used this project as a chance to brush up on my Cloud Formation skills, as well. My fork, with CloudFormation templates, is on my Github account as well. There’s even a handy “Launch Stack” button if you want to set it up on your own AWS account.

The bot runs every few hours, looks for tweets in my account older than the interval I’ve configured (currently 7 days), and deletes anything it finds. It’s all pretty simple.
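The core decision the bot makes is just an age check. Here’s a sketch in JavaScript (the real bot is written in Go; the tweet shape and constant are assumptions for illustration):

```javascript
const MAX_AGE_DAYS = 7; // the configured interval

// Given a batch of tweets from the timeline, return the ones past the
// cutoff. (created_at is assumed to be parseable by Date.parse here;
// Twitter's actual timestamp format needs its own parsing.)
function tweetsToDelete(tweets, now = Date.now()) {
  const cutoffMs = now - MAX_AGE_DAYS * 24 * 60 * 60 * 1000;
  return tweets.filter((t) => Date.parse(t.created_at) < cutoffMs);
}
```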

Making this a thing

As I started working through this, I started thinking about enabling this for the other social media services I use. I don’t know why services, from Flickr to Pinboard, don’t offer ephemerality as a feature. If the feature is offered, it should be the default. As I mentioned at the start, I don’t believe we, as people, are prepared for a world with total recall of our every utterance. My thoughts on this are complicated[footnote]For example, I’m not in favor of right-to-be-forgotten laws, even as I want services to offer that capability at the individual service level…[/footnote], but suffice it to say, I am going to build tools that allow me to manage my social media presence following these guidelines.

I mentioned this to a few folks, and got a few enthusiastic “I want that for my account!” comments. So, I’m going to spin this up as a side project and see what I can cobble together. If you take a look at the code I linked to above, it’s very simplistic – fine for a single account, but not the best for a real service.

The other aspect of this I’m working on is governance. I don’t want to do this as a business – that’s not a goal. What I do want is a service that has a strong privacy stance, that offers high trust to folks that use it. One of the reasons I didn’t use the public services that are out there is that their business model is unclear.[footnote]I do think the popular ones, like TweetDelete, seem like fine options. That one, for example, is owned by a hosting company that doesn’t seem to need the revenue from a tweet deletion service.[/footnote]

I am hoping to use this as an experiment in a cooperative form of governance for an online service, one where any charges are transparently used to maintain the service, where the source code is available for people to review, and where users can have some sort of assurance that the code that is released is the code that the hosted service is actually running. These seem like interesting problems regardless of the service being offered.

Because naming things is easily the most important and most fun part of any project (seriously, I have so many domain names!), I’ve decided to call this the Time Fades Project. A placeholder page is all that’s over there, but stay tuned for more.

If you have any interest in this sort of governance topic, or in contributing to the service, or in what a good set of default rules are for these sorts of ephemeral behaviors (I expect this will need to be different for different social networks), please get in touch.

Saying goodbye to Google Analytics, Hello Piwik

In my previous post, I mentioned that I was looking at reducing my dependence on free services as an experiment to see if I can improve my privacy.

That post was about changing my behavior as a consumer. This time, I’m looking at the services I use in my personal development work, especially those services that feed ad networks. In the case of my personal sites, this means Google Analytics (GA).

I did some looking around and decided I wanted to stay close to the same functionality as GA. It’s not a fair comparison if I don’t have the same features, so that criterion limited my choices.

After some poking around, I settled on running Piwik. It’s open source, free, and can be self-hosted on hardware I control. It also seems extraordinarily customizable, though I haven’t done much with that yet.

There are other choices like Mint or Woopra or Snowplow. There are actually more commercial options than I realized, in addition to the giants like Adobe’s Marketing Cloud (aka Omniture).

Ultimately, I chose Piwik because of its simplicity, its feature set, and its flexibility. I wanted to own the data[footnote]I’m surprised more companies don’t set up their own analytics, at least in addition to their primary service. A lot of interesting things can be done with the raw data.[/footnote] and I also wanted the ability to keep up with a massive site if necessary.

So far, so good.

My Piwik Setup

I set up a cheap VPS somewhere, created a domain to host the server, and then ran Piwik on that single box. It looks like the service could have been installed on my simple web hosting account at Pair Networks[footnote]Which I recommend: use this referral link to give me credit.[/footnote] with room to grow, but I wanted to work through the server setup as a refresher on building a VPS from scratch.

I then followed the Getting Started guide. That’s pretty much it.

Setup was also not as smooth as I hoped. The configuration wizard has a bug, for example, that wouldn’t let it complete (I fixed it locally because I could – yay open source). It’s also non-trivial to set up an Apache server with SSL enabled when you haven’t done it in a while.

Cost

Piwik has a hosted option where I still own the data, but it’s not cheap (Piwik Cloud starts at $29/month). Not the best option for me.

The good thing is that the software itself is free. Of course, nothing is exactly free. Here are some of the costs I ran into:

  • hosting for my own Piwik instance
  • SSL certificates to enable HTTPS
  • GeoIP database for accurate IP to location lookups (I ended up sticking with their free option)

This list doesn’t include my time getting all of this running, plus the time required each year to make sure the servers are secured, running the latest security patches/upgrades, and are monitored.

Looking back on it, Piwik Cloud might have been worth it when you consider the time & money spent.

Bye Google Analytics

I’ve removed Google Analytics from my main personal web projects, and replaced it with the Piwik tracking call. Since my server is the only thing that sees this data, hopefully people are more willing to whitelist the tracking domain in their ad blocker.
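For reference, the replacement is the standard Piwik embed snippet, which looks roughly like this (the tracking domain and site ID below are placeholders for my setup):

```javascript
var _paq = window._paq = window._paq || [];
_paq.push(['trackPageView']);
_paq.push(['enableLinkTracking']);
(function () {
  // stats.example.com stands in for the domain hosting my Piwik instance
  var u = 'https://stats.example.com/';
  _paq.push(['setTrackerUrl', u + 'piwik.php']);
  _paq.push(['setSiteId', '1']);
  var d = document, g = d.createElement('script'),
      s = d.getElementsByTagName('script')[0];
  g.async = true; g.src = u + 'piwik.js';
  s.parentNode.insertBefore(g, s);
})();
```

Drop that in where the GA snippet used to be and you’re done on the client side.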

Some Observations on the Mobile Market in India

Living in India has given me a new perspective on a lot of things. Professionally, I’m constantly learning a lot just by seeing how differently people use their phones, and how different the market and ecosystem around mobile is here.

Intex? Karbonn? Spice Mobile? Oppo?

The biggest eye opener has been seeing the number of India-focused brands here in the smartphone market.

In the US, there are global brands like Apple, Samsung, HTC, Motorola, and LG, and then a spread of other smaller brands. Here’s IDC’s list of the top smartphone vendors in India last year:

  1. Samsung
  2. Micromax
  3. Intex
  4. Lava
  5. Xiaomi

The market is also full of up-and-comers like Spice Mobile and Karbonn, plus Chinese manufacturers like Oppo, among many others I probably haven’t heard of.

Windows Phone is still a factor here, and I do still see BlackBerrys from time to time, though they don’t rank in any public market reports I can see.

And feature phones – J2ME, basic phones… still big in India, too.

I’m Developing Android First

Most of the smartphones sold in India run Android, which has a 90+% market share here. When you’re talking about smartphones, nothing else is really worth talking about.

So, while I personally am a dedicated iOS user, professionally in India, our teams think Android first. This is not the case for our US development teams, where both platforms have roughly equal priority and resources.

In the U.S., Fred Wilson’s 2010 call for Android first was bad advice then (and probably still is now), but the realities of emerging markets like India means doing exactly that. It’s literally the only game in town that matters.

Price is King, But Technology Matters

India is a price sensitive market. Obvious, right? But what I’ve found living here and talking to coworkers is that it isn’t just price that matters, but affordable value. From Ben Thompson’s excellent blog, Stratechery, talking about Xiaomi’s (and others’) rapidly growing customer base in India & China:

These customers are not conservative, or even pragmatists: they are enthusiasts and visionaries who simply don’t have very much money. The proper way to reach them, then, is not to sell them trickle-down technology: they will see right through that, and dismiss it out of hand. Rather, the solution is to develop new business models – indeed, in the case of Xiaomi, a new company – that are built from the ground up to serve their specific needs.

This, too, is a powerful opportunity: there are far, far more potential customers in developing countries than there are in developed ones, but just because they don’t have much money does not mean they are technological laggards. Indeed, many of these customers are even more advanced when it comes to being mobile first because of the lack of a PC legacy, and they will embrace a brand that lets them live on the cutting edge.

Specs and feature bullets are still huge here, and not just because so many people in India work in the technology sector.

Android is Different Here

That central point, that people want tech but simply can’t afford the flagship devices, means that manufacturers need to find creative ways to manage cost while still offering an overall solid device. This creates subtle differences in user experience that go beyond the speed or feel of the device.

For example, Rs. 10,000 (~$160) is a common marketing cutoff for “inexpensive” smartphones (all phones are sold unlocked and contract free here).[footnote]Contrast that to a flagship Samsung Galaxy S6 Edge, which runs about Rs. 53,000 (~$833), or a 64GB iPhone 6, which runs around Rs. 56,000 (~$880), though it’s on sale at Amazon for Rs. 43,499 (~$684.49) right now.[/footnote]

So, I decided to pick up a Micromax Canvas A1, which cost me about Rs. 4800 (~$75) fully unlocked, got a SIM for it, and have been using it as my ‘free time’ phone when I don’t need work email or iMessage.

It’s not a bad device – performance is fine – and because it’s an Android One device, it runs the latest version of Android. It does have some limitations that impact how we need to develop apps.

First, it’s a 3G phone – no LTE or HSPA+. Even though LTE is coming online here in the cities, the state of the networks, plagued by inconsistent coverage, makes this a less significant limitation. App developers here already need to treat bandwidth as a precious resource. I’m used to seeing my iPhone on 2G/Edge networks regularly, and judging from our user reports, I’m not alone.

The bigger surprise was the limited internal storage. I always found Android fans’ focus on SD card slots odd, but now that I’m running a phone with just 2 GB of internal flash memory, I suddenly understand. I went to download an app yesterday and got an error that I was out of space – and all I had on the phone were about 10 reasonably sized apps (the biggest was Facebook, at 150 MB!). Android does let you move apps and some data to the SD card, so moving apps (when you can[footnote]I have an SD card installed, but not all apps allow themselves to be moved to the card (it’s a developer choice to allow this). Why wouldn’t apps allow it? Because moving to the SD card disables things like the app’s widgets. So… you need to focus on app distribution size, and then you have to think about the user experience when the app is on the SD card and parts of it just stop working. Bottom line: I’m not convinced SD cards are that useful, and rumor has it that Google is moving away from them anyway…[/footnote]) is the only way to keep installing new apps without deleting others.

There are even more subtle issues, some of which are detailed in this blog post at NextBigWhat. It’s a good summary of some of the challenges if you want to target the broadest swath of Indian mobile users.

Of course, you do get what you pay for. While things like Twitter and Facebook run fine, the screen isn’t as nice as the flagship phones. My main gripe, though? The horribly inferior camera.

Photo taken on my Canvas A1:

Canvas A1 Sidewalk Shot

Photo taken on my iPhone 6:

iPhone 6 Sidewalk Shot

No contest, IMHO.

I’m still learning a lot about India, and about writing software for Indians (and the world outside the U.S. more broadly – cricket, rugby, & F1 are global sports). We’re getting close to a few new releases at work, so I’m sure I’ll have more to learn and share soon. And there are still other topics – missed call marketing, the SMS market in India[footnote]Every company seems to send free text messages for everything. Seriously.[/footnote], and so much more.

Why we stopped using Trello, even though we love it

I tweeted this earlier …

… which prompted a few people to ask, “Why’d you stop using Trello?”

The answer is pretty specific to us and our particular organizational inertia (such as it is for a small company like us), but here it is:

We liked Trello a lot, but we ended up switching to Github issues. While it’s somewhat inferior to Trello, it had two advantages that made it compelling.

  1. We use Github for source control, and Trello really doesn’t integrate with that workflow at all. With Github issues, we can manage issues from our commit messages, reference them, and comment on them in a place we already have to look.

  2. We use Campfire, and Trello didn’t have any integration with it. We could’ve built that, but Github already has it, and so laziness won the day.
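That commit-message integration from the first point looks like this – Github scans commit messages for keywords and issue references (the issue numbers here are made up):

```
Fix crash when the feed list is empty

Closes #42 (see also the discussion in #38)
```

When that commit lands on the default branch, issue #42 closes automatically, and both issues get a link back to the commit.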

In truth, Github issues is pretty nice, too, so it’s not like we’re giving up that much.

I did like Trello’s visibility and its visual metaphor – it’s a lovely app. The other great thing about it was that it was easy to throw up on the screen and use during meetings. Github issues (or any bug tracker, really) is merely OK projected.

Anyway, all of this might be moot now. Someone did the work we didn’t want to do, and built a service that integrates Github & Trello. We may have to take a look at this. 🙂

Sensors, LEDs & iPhones, oh my!

Over Christmas, I hacked together my first hardware/software project. It’s been a long time since I’ve picked up a soldering iron, let alone built something worthy of sharing. It turned out to be a fun little project.

Cause, effect, agency

I got the idea for the project while looking for Christmas toys for my son. I wanted to find something that would teach him simple cause and effect relationships, where he could do something (e.g. clicking red & blue blocks together) that produced an observable effect (e.g. the blocks changing color to purple). I hoped that I could prime some interest in science. I also really wanted to instill a sense that he can make things happen for himself.

For a two year old, I basically came up empty.

But for a kid slightly older, we’re living in a golden age of hackable creativity. We have 3D printers that are slowly becoming affordable. The Internet makes finding (and sharing!) instructions on building everything from customized furniture to undersea robots easy. Open source and community based tools are getting cheaper and easier to use every year. Several businesses have grown up offering easy instructions and tutorials. (Come on, these look cool, don’t they?)

So, I decided I’d use the four-day Christmas long weekend to hack together a hardware prototype (with help from my wife’s nephew).

The project

For the project, I set out to build a simple thermometer & barometer that I could check from my iPhone. I also wanted it to have some visible indicator that would be fun to look at so my son could check it. As a beginner, I also wanted something I thought I could pull off.

My project centered around an Arduino microcontroller board. An Arduino is an inexpensive open source “electronics prototyping platform” that can be programmed using nearly any computer and a USB cable. Because it’s cheap and freely documented, people have hooked up dozens (if not hundreds) of sensors & other electronics to it.

I had an old kit laying around that I rediscovered after I returned to Fanzter and started working with our resident hardware hacker extraordinaire, Josh. I recommend starting with a starter kit if you’re just getting into electronics projects. You can get several decent options from Adafruit, MakerSHED, or Sparkfun. I have an older Sparkfun Inventor’s Kit, but any from these three vendors will do.

You should go through a few of the tutorials before trying the rest of this to get familiar with the basics of Arduino programming.

Here’s the full parts list:

Tools:

In addition, if you want to get an app running on your iPhone, iPad, or iPod Touch, you’ll need to have a developer account with Apple.

The build out is really simple. The BLE shield snaps onto the Arduino board, passing the Arduino’s pins and sockets through itself. Just make sure you line up all the pins and sockets. More details at RedBearLab if you want them.

For the LED matrix and the temperature sensor, there’s a little soldering involved. For both the soldering and the basic wiring setup, I followed the instructions in Adafruit’s tutorials:

Make sure you position the LED matrix correctly before soldering it. I got multiple warnings about that from people.

Here are the results of my soldering job:

Arduino project images

I’ll admit, I’m proud of how well that came out considering it was my first soldering project in 20 years.

Arduino project photo

The only difference in my final wiring from the two tutorials is that I hooked the matrix CLK and DAT pins to the same rows containing the CLK and DAT lines from the Arduino to the temperature sensor. In the picture at left, those are the green and orange wires (click through for a larger view). This works because they both speak a protocol called I2C and have different addresses. [1]

For power and ground, I used the breadboard instead of hooking the sensor or backpack directly to the Arduino board. This is standard, and what the kit tutorials encourage. Just thought I should mention it, since it’s not directly mentioned in the two Adafruit tutorials above.

The next step is programming the Arduino. Rather than walk you through all the details, here’s the source code. Feel free to fork the project and mess around. I’d appreciate any bug fixes if you have them. To use the source code, you’ll need to install the Arduino software & the Ino tool. I used Ino so that the github repository would have everything you need. To run the project, launch Terminal, then type ino build and then ino upload to get the project onto your Arduino. If you want to see the serial output, you can use ino serial -b 57600 to get that on your terminal screen.

I also have the iOS code available if you’d like to play with that. You’ll need to be comfortable with iOS development to use this. I may submit a version to the store if there’s enough interest. Let me know.

That’s it. The finished wiring looks like this:

Arduino project images

When lit up, it looks something like this (only 2 readings are displayed – normally there are 8):

The arduino end of this, the simple temperature station.

The iOS app is really simple:

Weekend hack: arduino weather station talking to iOS app via BLE. Boom.

Drag up to trigger a connect or disconnect. Eventually, I’ll add a pull down to trigger a temp refresh. Otherwise, it polls every minute.

Known issues

The code isn’t perfect and, as I get free time, I’m still cleaning up a few things. Here are some known issues:

  • Bluetooth reliability: For some reason, the iPhone doesn’t seem to disconnect and/or reconnect properly to the device. Pressing the reset button on the BLE shield usually fixes it, which makes me think there’s something wrong in my code.

  • Memory usage: The main challenge programming an Arduino is that the device only has about 2K of RAM for the sketch. Yes, that’s two kilobytes. It’s a challenging environment when I’m used to phones with 256–512MB of RAM (or more). My code is definitely not particularly optimized, and the program regularly ran out of memory at first. I think it’s stable now, but it’s not as good as I think I can get it.

Next steps

I’m going to try to hook it up to a Raspberry Pi and put it in a weatherproof enclosure so I can leave it outside. My other goal is to change the LED matrix to an LED strip like this so I can make it look like an actual thermometer.

I’ll update this with photos if I get that far.

Hope that helps someone out. It was a fun project, and I’m looking forward to working on this more.


 

1 I2C is a simple two-wire interface to hardware components. I2C allows the Arduino to control multiple devices over just two pins. The Wikipedia page has the gory details, but just know that each device has an address which has to be unique, and then you just wire them up in parallel. The LED matrix backpack from Adafruit provides an I2C interface to the matrix, and the Bosch sensor comes on a board that also speaks I2C, so all the work is basically done for you.

That’s more detail than you probably need, but I thought it was neat.


Update: Two corrections above, both minor but notable. I accidentally described an Arduino as a microprocessor instead of microcontroller, but then Josh pointed out that it’s really a whole platform because the microcontroller is the specific chip at the heart of the Arduino. It’s a significant detail when you get more advanced because different versions of the Arduino might have different microcontrollers at the heart of the platform.

The other is how I described the I2C wiring in the footnote. The sensor and the LED matrix are wired in parallel, not series. I had a feeling that was the wrong word, but forgot to look that up. Minor detail, but again significant for deeper understanding.

Sorry about both of those. They’re fixed above.

Proxigram now supports Flickr

Quick update on Proxigram: it now supports Flickr, Yahoo’s popular photo sharing service. If you’re a Pro account holder, it will even get realtime updates from Flickr, just like Instagram provides.

The “point” of the app has changed, too. The goal is to build a single API endpoint for all of your photos. While the photos will still be hosted on their respective services, you can now get one read-only API to see a normalized view of them all.

The project is still open source, so if you’re looking for a sample node app that connects multiple third-party services via OAuth (using passport.js), you can get the source on Github.

Facebook support is coming next. If you want support for your favorite photo services, please let me know what you want or, if you have the ability, submit pull requests with patches.

I’ve also written a few bits of supporting code for this. I abstracted out the basic PubSubHubbub verification calls into a standalone library: node-push-helper.
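For the curious, the verification handshake that library handles looks roughly like this. The parameter names come from the PubSubHubbub spec; the function shape and return values are my own for the sketch.

```javascript
// When you (un)subscribe, the hub calls back your endpoint with
// hub.mode, hub.topic, hub.challenge, and optionally hub.verify_token.
// You confirm the request is one you actually made, then echo the
// challenge back; anything else gets a 404 so the hub gives up.
function verifySubscription(query, expected) {
  const ok =
    query['hub.mode'] === expected.mode &&   // 'subscribe' or 'unsubscribe'
    query['hub.topic'] === expected.topic &&
    (!expected.verifyToken ||
      query['hub.verify_token'] === expected.verifyToken);
  return ok
    ? { status: 200, body: query['hub.challenge'] } // hub checks the echo
    : { status: 404, body: null };
}
```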

I also have stubbed out a new node Flickr client. I made a new one because I wanted to use OAuth instead of the deprecated Flickr authentication methods. After trying to retrofit one of the other libraries, I decided to just start over. I may merge this back into one of them, but for now, expect new functionality in the coming weeks. Here it is: node-flickr.

Would love code reviews and criticism from node experts. I know the code can be better.

Proxigram – a sprint using Node.js, Express.js, & the Instagram API

I’m happy to share a little experiment I played with this week. I needed to take a look at Node.js & its family of technologies for a project, but found it hard to find good explanations of best practices. There are a half-dozen competing boilerplate/template samples with very little in the way of explanation or comments. So, I decided the best way to get familiar with the nitty gritty of building a Node/Express app was to write one.

I decided to solve a simple problem I had. I wanted to get my recent photos from Instagram onto my blog. I wanted it to be a simple JS call or plugin, and I wanted it to be smart about storing keys for read-only access to my Instagram account. It seemed like a simple proxy for the Instagram API would suffice. The OAuth credentials are stored on the proxy and a new, non-Instagram specific key gets embedded in the JS.

And thus, Proxigram was born.
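The core of that proxy idea can be sketched in a few lines. This is not Proxigram’s actual code – the route name and key handling are simplified stand-ins – but it shows the shape:

```javascript
// Keys the blog's JS widget is allowed to present; the Instagram OAuth
// credentials live only on the server and are never sent to clients.
const APP_KEYS = new Set(['my-public-key']); // placeholder key

// Decide how to answer a request to the proxy's "recent photos" route.
// The real app would fetch from the Instagram API (or its cache)
// instead of returning an empty list.
function handleRecent(query) {
  if (!APP_KEYS.has(query.key)) {
    return { status: 401, body: null }; // unknown app key
  }
  return { status: 200, body: { photos: [] } };
}
```

The widget only ever sees the app-level key and a read-only, normalized response.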

Sure, it’s a little contrived, but now that I’ve built it, I’ve got ideas for some improvements and, even better, I now have a functional, real app to share with everyone so I can get feedback about all the things I did poorly.

The source code is all on Github, both for Proxigram itself as well as the jQuery Proxigram library to access it.

The app is interesting to look at on a few levels. The package.json listing the bits and pieces I used is below. The app talks to the Instagram API and uses MongoDB to cache results local to the app. It keeps that cache fresh by using Instagram’s real-time API to get updates for users. It uses passport.js for authentication (though it seems like more of the cool kids are using everyauth these days), and less.js for the stylesheets.

So, if you need a working example for all of those things, here you go.

Please leave feedback about things that make you itch about the code. I know a bunch of you are serious Node.js mavens, so I’m really curious what you folks think and what conventions you’re following in your projects. My biggest question at this point is how to deal with making shared components available to code in different files. For example, I made some of the authentication filters global because I split my routes up into multiple “controller-ish” files. None of the boilerplate/template apps did it better, IMHO. If you have thoughts on that one, let me know.

PS. The images on the right are getting served through Proxigram. 🙂

Here’s the dependency list from my package.json:

{
    "name": "proxigram"
  , "version": "0.1.0"
  , "private": true
  , "dependencies": {
      "express": "2.5.8"
    , "less-middleware": ">= 0.0.1"
    , "jade": ">= 0.0.1"
    , "moment": ">= 0.0.1"
    , "passport": ">= 0.1.8"
    , "passport-instagram": ">= 0.1.1"
    , "passport-http-bearer": ">= 0.1.2"
    , "mongoose": ">= 2.5.0"
    , "connect": ">= 0.0.1 < 2"
    , "connect-redis": ">= 1.0.0"
    , "connect-heroku-redis": ">= 0.1.2"
    , "airbrake" : ">=0.2.0"
    , "instagram-node-lib": ">=0.0.7"
    , "express-messages-bootstrap": "git://github.com/sujal/express-messages-bootstrap.git#bootstrap2.0"
  }
  , "engines": {
      "node": "0.6.x"
    , "npm":  "1.1.x"
  }
}

Google I/O: An unadulterated celebration of technological imagination

That mouthful is my one-sentence description of Google I/O. The demo floors and tonight’s After Hours party are full of whimsy and wonder, literal playgrounds for technology geeks of all stripes.

The atmosphere at I/O is all about the possible, the future, and the fanciful. There are companies making robots, others building home mesh networks that can control all your lights, and yet others working on all sorts of crazy gadgetry. It was all very cool. I went from demo to demo filled with a sense of wonder of what the next few years might bring. Sure, there were really practical sessions about new APIs and tech, but stepping outside of the sessions put you in a geeky wonderland.

At the same time, I can’t help but see this as a metaphor for the differences between Apple & Google or Apple & Android. At Apple’s WWDC, it’s intensely about what will get done now. It’s about making connections and learning the tech that you’re going to put into your next application. Folks on stage demo apps that will launch very soon. Apple talks about features that will be available in weeks. It’s about go, go, go and very much about making money. It’s almost businesslike during the day, with fun during the evening.

WWDC is all about execution, making money, and shipping NOW. For Android, folks know it’s going to be huge, eventually, so they know they’ll make money, eventually. Every time I saw a cool hardware demo, or a neat looking app with some advanced tech, the answer I inevitably got to my, “When can I get it (I really want it!)?” was “fourth quarter this year” or later.

If that doesn’t summarize the state of the two app markets, I’m not sure what else does.

I’ll be adding more photos to my Google I/O 2011 set as I get time. I have some great photos of some of the demos at the party and around the daytime demo area. You can see what I mean for yourself.