All these internet things and devices, they track me in so many different ways, check it out:
– I use Google Latitude’s location history so I can see where I was at any given day or time for the past few years (“On Oct 16th, you were at Barcade at 11pm”),
– I track all my card expenses on Mint to figure out what I spend the most on (“You spent $50 on Twinkies this month, you are over budget”),
– I use RescueTime to track what applications and websites I use and figure out my productivity levels throughout the day (“Your productivity dropped from 78% to 25% after lunch”),
– I record most of my bike rides and get cool stats (“You’ve ridden 150 miles the last two weeks at an average speed of 11 mph”),
– I tried recording what I ate every day, my weight and my exercise for some more stats about my physical body,
– I’ve managed to archive most of my files and pictures for the past decade or so (“My Documents\My Pictures\2002” lol windows) and I’ve been slowly moving all this to the cloud.
– For a while I programmed my phone to record in which room of the house I was in during the day (“You sat on your ass 90% of the day, congratulations.”).
I don’t really care that all this info about me is out there on the internet as long as I can access it and keep it private (facebook just barely makes the cut). The tools and the benefits for me far outweigh any privacy concerns. What’s the worst that can happen? they target ads at me? oh boy… I’d say the more data I can extract, record and graph about my life, the better (or the cooler?). What if one day we were able to upload the electrical activity of our brains 24/7, and algorithms could do their thing, correlate that shit and get stats about our thoughts? (“Last week you were 45% sad, 24% lonely, 60% horny, and 30% of a dumbass.”). Obviously I already know these things from personal experience and a journal would do just fine, but effortless recording of the variables of life always seems so damn cool, like a memory extension for your brain, with the accompanying thought process to analyze it (on a website/app). Actually, what I’m really waiting for is the ability to upload my complete mind and consciousness to the internet of tomorrow, when it evolves into our hive-mind’s neural network. Or maybe
…or maybe I’ve been spending too much time on the computer lately. blah. too much work, too much internet, not enough real life. yea this is going to be a long winter… I remember a time not too long ago when I didn’t track a damn thing except for the sun in the sky, where I was going to sleep that night and provisions for the next day. No phone or computer, just people. life was good, I didn’t need to track down and analyze stats on how to be happy. I already was.
But hey it’s not a complete bummer. I do like to work and make things on the computer. I just get sucked in it way too much sometimes and kinda forget I’m a human being. No but really, aren’t we all just data? Our thought processes, experiences, brain wiring, everything can be represented as data, we just haven’t figured out how to read it all. OMG I’M JUST DATA. ok back to computerland.
From Mashable: “MessageParty, an early-stage YCombinator-funded startup, takes the classic concept of a chat room and adds a geosocial twist by making any chat room location-aware.” Here’s the TechCrunch link.
Not a new idea. But the app is actually pretty cool in its simplicity (though still rough around the edges), and I found the video absolutely hilarious; it also illustrates the new world we live in:
Ok cool… hacking time now. I’ve been really interested in real-time mobile stuff lately, so I was curious about how this service works. Enter Wireshark (formerly Ethereal), a packet sniffer. So I connected my iPhone to WiFi and started looking around… they pass the messages around via plaintext HTTP in JSON payloads… looks like the client polls the server with a GET every few seconds. Simple enough… Ok, let’s set up a filter “http && ip.addr == 220.127.116.11” (that’s their server’s IP) and take a better look:
They’ve got some kind of Ruby app running behind the scenes: the client GETs /rooms/:id/roommessages.json to fetch new messages, and POSTs to /roommessages.json for outgoing messages. The JSON payload basically just has your profile pic URL, your user_id, the room_id and the message you want to send.
Fake the headers, fake the JSON payload, cuz they be faking everybody out there … and voila…
I don’t mean to be an ass, but this just ain’t gonna fly… I didn’t check if they rate limit, but you can pretty much spoof anything in there… I know it’s a very early version, but come on guys… you got YC funding and tons of press… surely you could have done better for the first version?
Let me walk you through my exploration of these new technologies as I get acquainted with them. The objective will be to display geo-tagged tweets around the world in real-time on a mapview, with the profile pic and tweet info on the annotation (see screenshot below). This can be accomplished in just a few lines of code, less than 100 with the help of some cool new frameworks and libraries.
There’s a lot of buzz surrounding the real-time web nowadays, which involves pushing data and events from servers straight to clients as they happen. There are plenty of ways to achieve this, both in browsers (websockets, long-polling, etc.) and on mobile devices (sockets, AMQP clients, etc.). The catch-all term is Comet.
Here’s a stackoverflow discussion on some of the options on the iPhone. I tried using the STOMP client they mention and setting up an Apache ActiveMQ server (an AMQP broker), but an ideal configuration proved hard to come by. Basically I set up a “topic”, which is used as a one-to-many kind of broadcast, but messages were waiting for an acknowledgment from the phone and everything just started to lag for everybody. I’m sure this can be set up properly, and there are other messaging systems I want to try out such as ZeroMQ and RabbitMQ, but it was just a quick test so I didn’t look too much into it.
Anyways, there’s plenty of info around the web about Comet and push. What I’m going to do here is walk you through my first forays into node.js and Appcelerator Titanium Mobile to build a real-time mashup of tweets on a mapview:
Twitter Streaming API
They’ve had this API out for a few months now, so there are plenty of libraries in a bunch of languages for easily accessing it. It basically involves keeping an HTTP connection open to their servers and continually receiving data through that pipe. Twitter is beta testing User Streams, which is better suited for user-facing twitter clients. Both of these APIs should eventually help twitter with their load issues, since constant polling by everybody can get pretty heavy. Here are some more advantages.
For the purposes of this demo, and since it’s all I have access to, we’ll be using the general purpose public streaming API. You’ll need a twitter account to be able to use the API, and they are currently using HTTP Auth on it (outside of their HTTP Auth deprecation schedule this month). The default access role grants you 10 boxes of one (lat/lon) degree, which is basically the size of a city. You can request the “locRestricted” role which allows 200 boxes of 10 degrees each. This almost covers the entire land-mass of the earth. They don’t have any way of just querying all geo-located tweets, not even with the firehose access role, so you have to construct your boxes yourself (I checked with their support).
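The original snippet for generating the boxes isn’t shown here, but a sketch of the idea: tile the world into 10-degree boxes in the format the “locations” parameter wants, comma-separated lon/lat pairs, southwest corner first. The naive tiling below produces way more than the 200-box cap, so which tiles you keep (pruning the open ocean, say) is up to you; the coordinates and cap are from the access roles described above, the rest is illustration.

```javascript
// Build candidate 10-degree bounding boxes for the streaming API's
// "locations" parameter: each box is [swLon, swLat, neLon, neLat].
// Latitude range skips the poles, where there are no tweets anyway.
var boxes = [];
for (var lat = -60; lat < 80; lat += 10) {
  for (var lon = -180; lon < 180; lon += 10) {
    boxes.push([lon, lat, lon + 10, lat + 10]);
  }
}

// Flatten into the comma-separated string the API expects.
// You'd prune this list down to 200 boxes before actually using it.
var locations = boxes.map(function (b) { return b.join(','); }).join(',');
console.log(boxes.length + ' boxes before pruning');
```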
Just copy paste the array from the output, and stick it into the query below…
node.js
It’s all the rage now, and for very good reasons. Event-based, non-blocking stuff is just so awesome. I haven’t done much with node.js yet other than this demo, but hopefully I’ll get to use it more soon enough. So basically just go ahead and install node.js, it’s a super simple install. Naturally there’s a twitter streaming library for node.js, called twitter-node. Here’s the github page. Go clone that somewhere (I haven’t explored node.js package managers yet). Be sure to run the build script they have in there to install the base64 library you need. Grab the boxes array from above and put it in there for the location query (for some reason I couldn’t get to 200 boxes with this library).
So basically what we are going to do is create a socket server with node.js on port 6969, and for every new event the twitter library sends us, we’ll go ahead and push that to the socket, which in turn will push it out to all the clients currently connected. I haven’t figured out how to close the socket properly if a client was an asshole and didn’t FIN, leaving the socket in a sort of limbo state. I don’t know if this even matters, but basically an exception will be thrown for each of those limbo handlers every time we try to write to them.
That takes care of the server side… now to the client side.
Appcelerator Titanium Mobile
After trying most of the cross-platform mobile frameworks out, Appcelerator completely outshines them. Rhomobile is cool in that it’s Ruby on Rails-esque: write once, push out to FIVE different platforms (iOS, Android, Blackberry, Windows, Symbian), but it’s just too slow and fugly (not completely native).
Anyways… after you get everything set up with Appcelerator, go ahead and create a new project in Titanium (iPhone or iPad, doesn’t matter) and put the following code in your app.js. We are just adding a mapview to the window, and some buttons to control the socket connection. When the ‘read’ event on the socket gets triggered, meaning a new JSON blob came in, we parse it and create the corresponding annotation for the tweet. We can easily add a remote image to the annotation for the profile picture using Joe Stump‘s awesome tweetimag.es service.
That’s basically it. There are a few bugs, but this isn’t about perfection, just a quick sample of these new technologies. Scary as it may be, this is pretty much where we are headed. A constant flow of real-time information, following us wherever we go. Can’t wait!
Is this particular application useful? Maybe. I could see it being used to monitor emergency situations, or some type of big event. Problem is that very few people geo-tag their tweets yet. The average rate of normal tweets was 750/second a month ago. Compare that to maybe 10 geo-tagged tweets a second…
With the access role I have and the boxes I use, I’m getting about 5 tweets a second on the map. Don’t go thinking that the iPhone can support that many total annotations on the map like in the screenshot above (it’s the simulator).
They have yet to add socket support to the Android side of Titanium, so as of now this won’t work on the Android (though it’ll probably work without modification once they do).
A bunch of JSON parsing exceptions happen on the ‘read’ event of the client socket. I imagine it’s because more than one tweet might come in the payload per packet, or whatever else triggers the ‘read’, so the parser freaks out.
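A plausible fix, sketched in plain node (Titanium’s socket API differs, but the buffering logic is the same; this assumes the server terminates each JSON blob with a newline, which is an assumption on my part):

```javascript
// The 'read' event can hand you half a tweet, or several at once.
// Buffer the incoming chunks and only JSON.parse complete lines,
// keeping any trailing partial line around for the next chunk.
var buffer = '';
var tweets = [];

function onRead(chunk) {
  buffer += chunk;
  var lines = buffer.split('\n');
  buffer = lines.pop();           // last piece may be incomplete
  lines.forEach(function (line) {
    if (line.trim()) tweets.push(JSON.parse(line));
  });
}
```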
The bugs I encountered were that the pin drop animation isn’t working in the latest version of Titanium, though I know they are on it. Also if you terminate the app it’ll be the asshole I mentioned above and not send the FIN to the socket. A curious observation is that the socket remains connected when the app goes into the background state (probably just sleeping or something).
Please feel free to comment below any best practices on this (I’m a newb), or any questions you may have.
So I kinda vowed to myself not to fall into the trap of making a blog, writing a first post, and having that be my only post for months. I don’t think I have much to say, so filling up a blog with constant mind vomits might not come naturally to me yet, but I’ll try.
I’ll start off by expanding on where I left off in my ever so inspiring first post, overview style:
By programming stuff, I mean I’m going to talk about whatever little code snippets or topics I find interesting. My background has been in the web and unix world; PHP / MySQL and then Ruby on Rails. Now I’m jumping into mobile, mostly with Cocoa stuff on the iPhone/iPad. I’m currently Lead Developer at Social Mobility Inc.
I’m still recovering from the shock of having a whole computer in my pocket, that is always connected, and that I can easily program. Did we have any of this even 5 years ago? I’m especially thrilled about the fact that there are gazillions of these things out there, and how mobile is quickly changing our society. I’ve always tried to stay away from fanboi-isms, but right now Apple is pretty much plowing right through the restrictive mobile space and making room for us developers. We then get great competition, and move mobile even further.
I’ve been learning Objective-C for almost a year now, and the amount of fun I’ve had along the way has been well worth it. Little did I know I was actually gonna enjoy the hell out of a CS degree (I’m pessimistic like that). The language itself seemed kinda terse at the beginning, but the whole Cocoa API is pretty rich and stable, considering that A: it’s a f-ing phone, and B: it’s been only 3 years for the iOS platform. Still, haters be hatin’.
I’ve been interested in location based services lately, doing some things with the SimpleGeo API. I’m having tons of fun in this area, mostly because nobody even knows what works yet in location. But whatever we come up with, it’s probably going to need a heavy-duty platform tailored for geo stuff. What SimpleGeo is doing is pretty amazing, and a lot of their tools are actually open source on github. They have clients for Ruby, Java, Python, PHP, Objective-C, .NET, etc., so I’ve been covered from project to project.
I’m also very eager to see what people come up with for Augmented Reality. We are still in a kinda clunky phase right now in AR, but you just know that mobile computers are going to keep avalanching and become more and more ubiquitous. I’m waiting for wearable computers to be considered cool. As with location, I find it interesting because nobody knows what’s next.
Nintendo did some AR things with the new 3DS, as it has two outward-facing cameras and a 3D screen. Also, some of the iPhone 4’s biggest upgrades were tailored specifically for AR: better location, the gyroscope, raw camera access. I’m expecting to see some pretty immersive things coming soon.
So anyways, I’m just glad I’ve found a shitload of interest in something and hope to share it with this blog, so stay tuned.
Like every other programmer out there, I do some photography as well. I mostly do things like HDR, light graffiti and panoramic pictures to overcompensate for my total lack of photography skills. The results sometimes end up not sucking so much, so I might share those, like this little planet in South Africa:
Ok, I’m adding too many topics to this blog already. Let’s go back to programming. YAGNI-fu.
There may be some random things I would like to share. Probably not, but you have been forewarned.