Friday, October 3, 2008

Prog-iona

Somehow the Progress acquisition of IONA slipped under my radar. Funny, because it brings many things on my periphery full circle.

I used to work for C-bridge Internet Solutions. eXcelon bought C-bridge after the bubble burst in 2002. Fast forward to 2005 and I'm in Portland working for a now-defunct company trying to break into the ESB space via an open model. That space leaned towards JBI. I'd still argue that we bet correctly and that JBI has done a whole lot of not much, offering "vendor independent" layers on top of things people didn't care about vendor independence for, but in 2005 we gave it a serious look.

LogicBlaze used to be one of the primary FOSS backers of JBI, and from the looks of the mailing list, their core team continues to be heavily involved in the development of ActiveMQ, on which their JBI implementation was based.

In April 2007, LogicBlaze was acquired by IONA. Around that time, Mike started talking with Larry, who was our boss in the latter part of the eXcelon days. Mike and I were both at LM, and we regularly worked with Stephen, who doesn't blog (sorry, no link) but formerly worked for IONA. Mike's talks with Larry didn't go anywhere, but the IONA/C-bridge/eXcelon/Progress/LM circle was complete.

Now, last month, Progress buys IONA and Larry again works for Progress. I'm also again doing a deep dive into ActiveMQ, whose IP is now, I guess, owned by Progress, which I used to work for. So in some sense, the company that owned my butt as far back as 1999 is still influencing my role today.

Add to that mix a couple of interactions via TSS, OSCON and IRC with Hiram and James (which they probably don't recall but did result in some attribution), and things get even more tangled.

It never fails to amaze and amuse me how small this business is, despite being so big. The networks I started building 10 years ago still come into play even today in subtle ways and keep overlapping. Six degrees of separation is nothing; I have trouble keeping it to two.

Friday, August 8, 2008

Classes of iPhone Users

I've had an iPhone for some time now. While it does bother me that Apple doesn't really give two shakes about contributing back to the open source community on whose shoulders they have built, I'm also pragmatic. The device is simply the best of its kind out there.

I like to consider myself a student of other iPhone users. Any time I see someone with one, "classic" or 3G, I like to snoop on what they are doing. Over the last few months, I've noticed a few classes of iPhone user emerging. What follows is my untrained attempt at social anthropology mixed with humor.

  • The Frustrated Designer - a Mac user through and through. They always buy 1st gen Apple hardware, even though rational people know better. Most easily identified by their propensity to verbally express a tangible frustration while attempting to use 97 nested DIVs, advanced CSS selectors and 12578 lines of JavaScript to center an image in the iPhone Safari interface. Would write a native Objective-C app, but doesn't know what a compiler or a socket or Objective-C or a native app is.
  • The Trophy - doesn't really know why they have an iPhone, just that their wealthy partner gave them one. Most easily identified by well-manicured nails attempting to tap out a text message with the sound at full volume for each key press. Never types faster than twelve keys per minute. Commonly sits next to me on airplanes.
  • The Showman - Doesn't really know why he needs an iPhone, just that he does. Identify the showman by his inability to do much other than talk on the iPhone, set it on the table during meetings so you know it's there, and by the fact that his emails still end with "Sent from my iPhone" despite having the device for nearly a year.
  • The Tech Luminary - Installs every possible application from the iTunes store hoping to gain status as someone qualified to comment on the state of iTunes Store/iPhone technology while at the same time not knowing a thing about the underlying platform. The luminary mocks Windows users for arbitrarily installing shareware from .ru domains, but never questions why he has installed 47 iPhone applications, 43 of which are never used.
  • The Frustrated Developer - Has an iPhone application in the works, but can't talk about it due to an unenforceable NDA. Identify these users by their regular and sanctimonious references to their NDA despite never having really finished an application or by their smug looks at local user groups.
  • The Marketing Director - Only knows that they should have an iPhone because their underlings said so. Loves the speaker phone, thinks it's the major advance of the device. Most easily identified by the fact they are still running the same firmware they bought the device with.
  • The Commuter - Found in urban areas, this individual cannot stop texting, browsing, or listening to podcasts long enough to exhibit any awareness of those around him/her and actually move out of the path of egress for everyone else on the transit device. Likely to accidentally enter an altercation with the local meth addict while engrossed in something Twitter-related.
  • The Power User - The most elusive of the iPhone users, understands device limitations (most Bluetooth profiles lacking, no Copy/Paste, no Search, AT&T partnership) along with a healthy distrust of anything Steve Jobs claims. Generally refuses to tout the virtues of the device unless overhearing someone rant about how much better a Nokia or RIM device is. Understands there are no better alternatives and that using the device sticks it to "the man" by making consumer opinion matter again. Hopes Android kicks major ass and that people will take Openmoko seriously.
It's probably obvious that I associate myself mostly with the last category, but there is plenty of the commuter in me as well.

There is a snobbery around having an iPhone. It's sort of like riding a fixed-gear bike in Portland; you feel like it makes you cool, and you usually go out of your way to let people know you have one. Then you see all of the other DBs using one and realize it's way too much machine for them. Some of them even run into walls while using the device you admire. You can't give up your baby, so you're left trying to quietly blend back into the masses, hoping nobody noticed and that you don't get associated with the new user population. I'm sure there's a Gartner graph to prove it all out somewhere...

Wednesday, April 16, 2008

I Can Ride a Bike With No Handlebars

This post on slashdot blew me away. I'm typing this on a Hardy/Kubuntu alpha (maybe beta by now?) installation, and it has been more difficult to install than several of my Gentoo installations - and that's saying something.

Granted, I went with KDE4, which is simply asking for problems, but really, this was a doozy. Issues of note:

  1. The now-famed Ubuntu "Black Screen of Death" - I attribute this to my having some slamming video cards, but seriously, the people's distro should *never* get itself into a place where you can't fall back to the kernel fb. When good ole' Ctrl+Shift+Del can't save you from X+driver madness, something is fundamentally wrong. For the interested, I had to drop to a recovery console and install nvidia-glx-new, then hack xorg.conf to use nv drivers, then "activate" the proprietary nvidia drivers. Not pleasant.
  2. Java UIs fail - mousing over certain Eclipse SWT widgets core-dumps java. That is generally difficult to do - SWT is one of the more stable Java UIs out there, and it dies *hard*.
  3. Probably most important, my shell gets into some weird sound death-trap. Yes, that's right, I said shell and sound are problems. Alarm bells going off for anyone yet? Basically, the DCOP issues on the Ubuntu forums don't seem to be fixed, for me anyway; my shell is trying to do something sound related (even with sound off) and hanging until I start a corresponding shell to recover the first. The scenario looks something like this: start System -> Konsole -- see a DCOP error message, see Konsole hang. Start System -> Konsole 3 -- watch Konsole work while Konsole 3 hangs. The two seem to be fighting for control of the KDM, and one but not both works at a time. I spend ~40% of my compute time in a shell. When something that fundamental doesn't work, I freak out.
  4. FF3 widgets look like crap. That's not really Hardy or KDE4's fault, but it's true. Tabs and drop-down decorations both look simply awful.
In all fairness, Gutsy didn't install properly on this hardware either. Still, I'm not seeing how things in Xorg land have gotten any better; in fact, they may have gotten worse. Complain all you want about Windows, but at least Vista on the same hardware displayed a UI without me dropping to a rescue console.

Sunday, March 16, 2008

Stale Cake - With Frosting!

Funny comments by Microsoft product manager Lawrence Liu on Mike Gotta's blog. Removing some noise between the lines:

"I am asking customers to step back and assess what business problem(s) they’re trying to solve... Jive and IBM are trying to wedge ... themselves into the SharePoint 'pie' by focusing on feature-to-feature comparisons while we’re working hard with our partners to provide the right frosting (or Cool Whip) to solve our customers’ problems."

I'm confused by Lawrence's response here for a couple of reasons:

  • Bad metaphor is always difficult to read - Who puts frosting on pie? Who takes a "wedge" of pie? Good pie is best served hot by a knowledgeable baker, not after the original baker has stepped away from the pie, waited six months to see what other pie makers are selling, and then asked some other pie maker to fix the six-month-old, stale pie.
  • Certainly nobody is asking customers to *not* solve a business problem. That would be equally insulting - Microsoft does not have a monopoly on fixing business problems. If anything, they have a monopoly on causing business problems (early EOLs for XP, patch practices requiring substantial outages and reboots, and Vista protocol design that causes networking headaches all translate into BCP issues in my mind).
  • More comically, Lawrence is trying to convince customers that alternatives to *their* products are rash and that buyers are *safer* sticking with the platform and ignoring what other vendors are doing in the space. In my mind, this is akin to "only following orders" - but why take orders from the used car salesman at the end of the block? Sure, eventually Microsoft will figure out what is important for their most profitable customers; alternatively, customers can buy a number of flexible solutions today that meet their needs.

My translation: "We're Microsoft. You will do what we say is important, when we say it is important. Thank you, your next invoice is in the mail."

The term "Microserf" comes to mind...

Thursday, March 13, 2008

Erlang Musings

Completely bogged down by our 2.0 release push at work, so generally not a lot of time to blog these days. That, combined with the Twitter micro-blogging phenomenon, means I'm neglecting this blog. Sorry to the two people who read this :)

I did want to capture some thoughts on the recent discussion on the Erlang mailing list around Damien Katz's post about Erlang.

To preface my comments, I've been using a decent bit of Erlang over the past few weeks and have been learning the language for several months now. My current endeavors are in the distributed load-testing arena (Tsung didn't meet my needs). I'm by no means an expert and have nowhere near the experience of Damien or the members of the Erlang mailing list, but I'm beginning to feel like I'm not a total n00b.

Initial thoughts on Damien's post:

  • I've also found refactoring a bit difficult. Over time, defining an entirely different function to elicit a slight behavior change generally results in some parameter munging and then a call to an existing function. The problem is that, in development, the behavior of that existing function often changes, thereby affecting all the calling functions.
  • I was also intrigued by the response to Damien's post on the Erlang mailing list where I lurk but rarely post. Some highlights:
    • "sometimes giving a whining child a lollipop is a rational thing to do." - Richard O'Keefe clarifies this comment in subsequent posts, but the condescending attitude towards other languages continues to discredit him IMO. I generally get this sense from the Erlang community - "we know functional and you don't, so don't bother us with your critique of our superior language." Not sure why early adopters of most languages feel a need to succumb to Napoleonic urges and lash out at successful platforms. Java bashing is so very original these days...
    • Robert Raschke also jumps on the Java-bashing bandwagon, but with little valuable critique beyond the fact that he doesn't want to write Java in Erlang - not sure who asked him to do so.
    • As usual, Ulf comes through with solid, valuable feedback.
    • My experience has been similar to NAR's, and I related to his comments for the life of the thread. In particular, defining dozens of one-line functions has made flow control difficult to follow in some circumstances. That's probably my lack of understanding of the language idioms, but building on Damien's points, there aren't consistent examples of those idioms, so it's easy to lose your way. I stand by my claim that Wide Finder proved this for file I/O in Erlang. It shouldn't take a committee to help you write good file handling code.
    • I found Alpár Jüttner's comment interesting: "I think, the major obstacle for newcomers is not the syntax, but the immutable data structures". As a newcomer, this was not an issue for me at all. In fact, having spent more time than I care to remember worrying about dirty reads of shared memory, this is a welcome addition to my programming routine. Rather, I struggled with multi-byte strings (referring people to ejabberd is not sufficient documentation for an entire language), function proliferation, syntax ( =:= ), and most importantly lack of libraries.
      • Robert Raschke's clarifications on line endings were a huge help here.
      • The fact that POST'ing a form with http:request remains an enigma (eh, where are my param encoding functions?) really makes me feel like the language is a DSL for networking gear. Yes, Erlang is more expressive and concise than Java, but that doesn't mean much to me when I can write a Scala function to make an HTTP POST in four lines of code and have it run screamingly fast, whereas the same Erlang takes two dozen lines (and, based on the documentation, is questionably not process safe). I have the sense that the Erlang community would happily provide me with 10 different functions to solve the parameter encoding issue, but that misses the point - I don't have to worry about such things in most other languages I work with (see the sketch just below this list for the kind of thing I mean).
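
For contrast, here's roughly the level of effort I'm talking about in a batteries-included language - a minimal Python 2-era sketch with a made-up URL and fields, purely for illustration:

import urllib
import urllib2

# Form-encode the parameters and POST them; urllib2 switches to POST
# automatically when a data argument is supplied.
params = urllib.urlencode({"user": "kevin", "comment": "hello world"})
response = urllib2.urlopen("http://example.com/comments", params)
print response.read()

No parameter-encoding committee required, which is all I'm really asking for.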

In summary, I actually agree with most of Damien's original post. Having written a good bit of C/C++/Java/Python/Ruby, his observations are generally representative of how I feel approaching Erlang from that perspective.

Like Damien, I have a lot of issues with Erlang. I kind of hate Erlang. I hate it in the sense that I wish I had a better, more familiar solution to programming with actors. At the same time, it's immensely powerful and expressive. Every time I use it, I think "wow, I love functional programming" and "actors are so important, it will be amazing when the programming public realizes why we all need to do this".

At the same time, the syntactical and idiomatic issues I have with Erlang constantly nag me. I'm sure these will ease over time, but I have yet to reach a point where I've stopped looking/hoping for alternatives. Given some of the attitude displayed on the mailing list, I won't really have a lot of community loyalty if given a comparable option.

Monday, February 18, 2008

S3 Outage

A bit surprised this didn't make more news, but I find the S3 outage (originally via Tim Bray) fascinating. From what I can tell reading between the lines, it seems as though a handful of demanding clients ground the service down with bad authentication requests. Makes sense on some level; auth requests are computationally expensive, and in this case they came from within EC2. My guess is that Amazon treats EC2 as a more privileged network and ultimately allows a higher QoS level between EC2 nodes and S3 nodes. So, while it may be en vogue to diss "architects", it's also important to have people around who understand the fundamentals from TCP stacks through the crypto and application layers and who build reliable systems across all three.

My prediction is that we'll see more of these incidents from EC2-hosted nodes, then growing out to bad S3 requests across the public network launched from various botnets. What Amazon is undertaking with EC2 and S3 is not an easy problem to begin with; protecting it from DoS and DDoS attacks is an even more difficult one.

Wednesday, February 13, 2008

The Power of Simple, Online Collaboration

I won't attempt to use any fancy market-speak or terminology. I've long been a believer in how blogs and wikis can drive collaboration inside a company. One of the things Jive is really good at is capturing the ROI of open collaboration in quantitative terms and telling a compelling story. Sam's latest blog is a great example. In short, based on a self-organizing set of categorizations, we can track cross-departmental collaboration and interactions. Cool stuff - not the type of data you can get out of SharePoint, that's for sure.

Friday, February 1, 2008

Rhino with readline - Hurray for rlwrap

Stumbled across dr.bob's sweet post on getting a readline wrapper around rhino.

Prior to reading his post, I had no idea rlwrap even existed; I'm simply amazed. GNU readline is IMO one of the most useful abstractions in all of *nix-dom. If I had known that I could use it even with tools that weren't readline-enabled, my previous lives would have been much easier. For example, it looks like some foo-equipped DBAs (inspired by one of my heroes, Tom Kyte) have found a way to use rlwrap to get a command history for sqlplus. It's been ages since I've actually used sqlplus (most of my time these days is thankfully in psql, which natively supports readline), but this would be step 0 in my Oracle client install.

Anyway, back to Rhino. rlwrap + rhino is lethal. Gives me vi key bindings in rhino.

My setup ended up like so:

* Installed Rhino to ~/java/rhino
* Created script in ~/bin which was already on the path
* chmodded said script
* Added vi editing mode to ~/.inputrc
* Edited rhino commands with vi bindings
* Lived the good life

The various files looked like so:

~/.inputrc
set editing-mode vi

~/bin/rhino
#!/bin/bash
# Build a classpath out of every jar under ~/java (Rhino lives there).
for jar in `find ~/java -type f -iname '*.jar'`; do
    CLASSPATH="$CLASSPATH:$jar"
done
export CLASSPATH
# Wrap the Rhino shell in rlwrap so readline (and my vi bindings) work.
rlwrap /opt/java/current/bin/java -cp "$CLASSPATH" org.mozilla.javascript.tools.shell.Main -strict "$@"


Drop an exploded rhino anywhere into ~/java, and you're good to go.

Of course, GNU readline supports hundreds of options, including other bindings like emacs if you are so inclined.

Thursday, January 31, 2008

REST Sprinkled into my XMPP

Been doing some low-level IQ packet handling in XMPP lately, and interestingly, RESTful philosophy seems to be influencing me from afar. We started out the packet design to send a list (a roster, but not an XMPP roster) of users associated with an artifact to the remote replica. The initial protocol design had elaborate Add Document, Add User, Remove User, and Delete Document packets with corresponding XMPP IQ responses and errors for each. It quickly became apparent that there wasn't really a need to have different packets for Add User, Remove User, and even Delete Document. Once we started envisioning the resource as a document and simply hammering the list of users along with the document - much the way we would expect an idempotent PUT to work in REST - the entire protocol got much simpler: just send the list every time. When there are no users, send an empty document packet. Otherwise, just send the authoritative list. That's much easier than the book-keeping needed to track whether I've delivered a notice for one user and whether it's been acked - just send the whole thing every time.
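
In code terms, the whole thing collapses to something like the following - a hypothetical Python sketch with made-up helper names, not our actual IQ handlers:

def publish_document_users(connection, document_id, users):
    # Always send the full, authoritative list; the receiver replaces
    # whatever it previously had. An empty list is itself authoritative.
    connection.send_iq("set", {"document": document_id, "users": sorted(users)})

def handle_document_users(store, payload):
    # Receiver side: take the incoming list and make it fact. No per-user
    # add/remove bookkeeping, no reconciliation of missed updates.
    users = set(payload["users"])
    if users:
        store[payload["document"]] = users
    else:
        store.pop(payload["document"], None)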

Even better, if the destination misses a packet, the sender doesn't care, it can just send the current one because the new packet is authoritative at the time it's sent.

All said and done, this eliminated a ton of state maintenance on both ends of the protocol. The sender didn't have to remember what it had fired off to the recipient and if it was adding or removing someone, nor did it need to manage the corresponding ack/error packets in detail - if a packet failed, just resend current. Likewise, the receiver was able to simply take the incoming update and make it fact - the reconcile process would ultimately be far easier than the alternative which amounted to checking if a user existed on the document, removing if they did and erroring if they didn't. That level of detail was immaterial to the sender, the sender simply wanted to publish *the* authoritative list of users and nothing more.

Too bad it took a practical implementation to realize most of the early protocol design wasn't needed :) Kind of reminds me of the recent SOAP vs. REST debates.

Monday, January 28, 2008

Really Deep Dynamics

Patrick loses me with this statement:

"Face it. The history of programming is one of gradual adoption of dynamic mechanisms."

I read the post as saying the equivalent of "automatics gradually replaced manual transmission". Sure, it's true depending on how you look at the problem, but I'm not convinced it is really a practical, applied endorsement of dynamically-typed languages. I'm not sure it makes the most sense if I'm worried about demonstrating a performant, maintainable, visually-appealing result to my customers. You may like that automatic for giving average Joe customer a test drive, but it's not what you take on the Autobahn and it's not what you use to haul a big load cross country. More likely, the automatic feature is what you sell the masses who can't use the more precise, focused machinery.

Looking at some of Patrick's points:

  • "The problem is they are rarely asked by open-minded people." - seems ad hoc and condescending from the beginning. Pot, kettle meet off-white. I'll try and ignore that tone for the rest of the response although it does resonate through the rest of the post.
  • "I know large systems can be built statically. I've done it in more than one language. I've also done it dynamically in more than one dynamic language. Have you?" - I'm guessing Perl and Python meet this requirement on the dynamic side; pick your poison on the static side, C, C++, Java even C# or VB if you like to make me suffer. From my experience, in code bases of roughly 100 KLOC in size or more, having a compiler in place to check that you haven't made any stupid mistakes was actually helpful. Note that I said helpful, not sufficient. The problem I have with relying on tests for code bases of size is that you assume programmers are capable (as a fallible piece of wetware) of writing near-perfect tests. This is simply not true and as the code grows larger, the chance that your tests are inaccurate increases - this is a function of human nature and deadlines, not any specific programming language, static, dynamic or otherwise.
  • As a counterpoint, I spent roughly $2000 of client consulting hours debugging an issue in a commercial product built on Python that looked like this:
[foo, bar] = getImage(file, iso_variant)
  • In normal circumstances, this returned a tuple of two items. However, in certain file system encodings, it returned a tuple of three items. Python, being its dynamic self, chose to ignore the third item in the return tuple under those conditions. In a full unit and integration test suite, the test for this method passed. In the runtime, it failed because the third item was ignored. There was no good way to simulate the test because you could only reproduce the pain when you physically had a CD in the drive (due to the way the Python interpreter handled the ioctls). In other words, the world's most awesome, dynamic test suite missed a failure that was still syntactically correct. The dynamic language let the failure pass silently, whereas a statically checked return type would have barked before the program ever made it to test, or at least bailed with a stack trace in the runtime. Instead, the Python interpreter quietly chugged along until the failure occurred more than 20 frames down the stack, making it even more difficult to diagnose. Not always the case to be sure, but I think you can make equally valid arguments on both sides of the fence if you've used both static and dynamic languages in the wild. Relying on tests for refactoring and coverage is a luxury of small to mid-sized code bases. Inevitably, if your code is big enough, it will touch enough edge cases that the tests do not cover all conditions, and changing a method will result in runtime errors, not test failures. That is a function of our cognitive limits as humans, not any programming language. (A sketch of this failure mode appears at the end of this post.)
  • "On the other hand nothing eases the pain of bad projects." - absolutely. Personally I don't see anything about static languages that makes this more likely. From my experience, bad PM can railroad any initiative, static or dynamic.
  • "Face it. The history of programming is one of gradual adoption of dynamic mechanisms." - so true. Except, it is true in the context of static languages becoming dynamic, not in the sense that dynamic languages are conquering the world. Consider the success of static languages since the 70's:
    • The browser I'm using to post this blog is written in C++. Cross-platform, dynamic C++ but statically-typed C++.
    • The system libraries this browser uses are built on C.
    • The windowing libraries this browser is using are built on C and C++.
    • The OS that the windowing libraries and system libraries used by this browser run on is built on C and exposes strongly-typed C APIs.
    • The iTouch I used to read the original post was written almost entirely in C.
    • Nearly every device, feed reader, web server and browser that parses this content will be written in C or C++.
    • The browser you are using to read this was almost certainly written in C or C++. It might use some JavaScript, but that is optionally typed (based on the current ECMAScript spec) and is interpreted by a C-based interpreter.
Conversely, I'm not using a browser built on Smalltalk. My system libs are not built on Scheme. In fact, to be honest, none of the utilities I use on a daily basis are built on a dynamic language (Objective-C being the potential crossover, and its intent is not to be dynamic). There may be a few Java applications on my MacBook Pro that I use regularly, but even those are heavily reliant on kqueue - a kernel facility - and native windowing libraries. Firefox is proud to announce new Mac-native widgets, not new Smalltalk plugins.

Are dynamic languages influencing all that native code? Well, maybe. I can reload kernel modules and I can link to DLLs or SOs, but loading all that static, native code is done through more static, native code. I don't use a Ruby linker; the linker that makes my OS do its dynamic magic is proudly compiled to a low-level, usually machine-specific, set of instructions, coded by a handful of really smart people at Microsoft or in the GNU project.

I'm hard-pressed to come up with any end-user applications that I find useful which do not directly depend on a statically-compiled something. Yes, some of the web applications I use leverage Ruby, Seaside, etc., but the browser or RESTful client library that accesses them is (for me anyway) immediately based on a statically-typed library (Firefox, libcurl, etc.).

So, I'm probably missing Patrick's point. Most of the innovative apps I use don't depend on dynamic programming languages. In fact, if you took away many of the languages Patrick cites - Scheme, Lisp and Smalltalk - I would be able to go about my day without a single glitch. Conversely, take away C, C++ or Java, and I'm pretty sure I'd notice (i.e. no OS, >75% of HTTP traffic, and ~50% of HTTP application servers, respectively).
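
For completeness, the failure mode from the consulting anecdote above looked roughly like this - a contrived Python sketch with hypothetical names, not the vendor's actual code:

def get_image(path, iso_variant=False):
    # Common case: two items. Rare case (only reproducible with physical
    # media in the drive): three items.
    if not iso_variant:
        return ("image-bytes", {"fs": "udf"})
    return ("image-bytes", {"fs": "iso9660"}, "joliet-extension")

def test_get_image():
    # The unit test only exercises the common path, so the suite stays green.
    image, meta = get_image("/tmp/disc.img")
    assert image

# Nothing complains until the rare branch actually executes in production,
# deep in the call stack:
#   image, meta = get_image(path, iso_variant=True)
#   ValueError: too many values to unpack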

Friday, January 25, 2008

XMPP For Integration

Integration bloggers have been talking about using XMPP for years now. The XMPP community has entire specifications dedicated to things like RPC, Service Discovery and PubSub that make weaving together distributed systems easier (I think the native presence information helps as well). Matt Tucker gives a solid overview of XMPP in the integration space with a new post that is getting a lot of press coverage and shot to the top of Digg. I feel like XMPP's potential as an integration tool is finally starting to get the attention it deserves. Maybe a result of the imploding WS-Death-Star stacks and heavy-weight standards that have plagued integration developers for years? Maybe a natural gravitation towards a better technology stack - hopefully.

Sunday, January 20, 2008

River Disappoints Again

I had a desire to see if I could get Jython or JRuby working with Apache River this weekend for some distributed monitoring and task execution I want to try. I need something quick and reliable, with code mobility, that ideally integrates easily with Java. I did see some new updates on the River site, which was initially encouraging, but after digging, it looks like little has changed. There were no source distributions, which was concerning, but I'd used the older Sun distributions of Jini before, so I wasn't scared about building from source. I proceeded to an SVN check-out. Sadly, the listed SVN URL doesn't work - there is nothing there.

Walking up the tree to the /asf/ root, there is no sign of the source or the River project. I'm sure it is around somewhere, but seriously, how much less approachable could this project be? This is no way to attract new users. I didn't think it could get worse than the old Sun site; I was wrong. What a shame.

Looks like it's back to Rinda then. I've avoided Rinda in the past for important data given that there is no reliable storage of tuples. At this point, probably easier to bolt that on to Rinda than to pull anything meaningful out of River. Alternatively, maybe a better approach would be to make smarter Rinda clients that can survive a failure of the master ring server with some local persistence and eventual consistency approach.

Thursday, January 17, 2008

CSRF is the new XSS

I've been looking at Chris Shiflett's CSRF GET to POST converter and have to say that it's got me a bit freaked out. I don't normally do the annual prediction thing, but the more I look at it, the more I think we'll see 2008 as the year of the CSRF, particularly if social networking sites continue to grow in popularity among less-technical users.

I've seen mention that the attack can be mitigated by using a nonce of sorts in the form and session data, a value that must be posted back with the form for a valid request. But as I look at how Chris executed his redirector (loading an iframe on click), I can't help but think that once I have an unprotected and confused security context on the browser, I'm able to work around such nonces when:

1) I can make a reasonable guess that the user has an authenticated session. I don't always have to be right, just every couple of times will do fine.

2) I can parse out a response from the server such that I can snag the nonce parameter. Not hard once I have a confused, unprotected scripting context in an iframe that the browser is mistakenly trusting.

#1 is not a big deal. I can make some reasonable guesses and target my links appropriately using a bit of social engineering. It doesn't have to be terribly accurate, just moderately successful, and I can grab some "seed" accounts from which to stage future attacks.

#2 is easy - if I can execute script in an iframe (as Chris' demo shows), then I can string together multiple XHRs. As long as the user has a session I can piggy-back on, extracting a form nonce is no more difficult than submitting the POST in Chris' redirector.
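
For reference, the session-bound nonce scheme I mentioned above looks roughly like this - a minimal, hypothetical Python sketch - and note that the two points above still apply once an attacker can run script in your origin:

import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # per-deployment secret; hypothetical setup

def form_nonce(session_id):
    # Render this value into a hidden field when serving the form.
    return hmac.new(SECRET_KEY, session_id, hashlib.sha256).hexdigest()

def is_valid_submission(session_id, submitted_nonce):
    # Reject the POST unless the form echoed back the nonce bound to this session.
    return form_nonce(session_id) == submitted_nonce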

Would love to hear some thoughts on how others are dealing with the issue in the face of user-provided content.

Oh, and if you're wondering how to execute one of those attacks, you have a Blogger account *and* you clicked on the link to Chris' account above, consider yourself vulnerable - that's about all it takes.

Sunday, January 13, 2008

+1 To Yahoo! Games '08 Predictions

Yahoo! Games is spot-on with their predictions for the upcoming gaming year.

In particular, I think we'll see the following:

1) The Wii will jump the shark - I played the Wii for several hours over the holiday season, and while it was fun, the novelty wore off quickly. I tried like mad to will Wii Sports into an online mode, but to my dismay, that doesn't exist. You can only swing a virtual tennis racket in the confines of your own living room for so long while your neighbor is playing Madden with friends across the country on a system two generations ahead of the Wii.

2) In spite of being the worst-marketed, most technically advanced gaming platform ever devised, the PS3 will rebound. For $200 more than the cost of a Blu-ray player, you also get a multi-core Cell processor in a Linux-capable beast of a gaming system that helps society by lending its processing power to protein-folding analysis in the off hours. I have a ton of stories about how its setup was better than the Wii's or the Xbox 360's, but those will have to wait for another post. In short, the PS3 continues to lag in online content, but I still believe that the hard-core gamers who spend the most money will gravitate towards the PS3 and away from the Wii, and the developers will follow. The early adopters I know (including me) are moving towards the better platform, in spite of Sony's seemingly insane approach to delivering it. I'm convinced devs will follow; nobody wants to build on the gaming equivalent of Visual Basic when the most advanced 3D platform ever is growing its install base.

3) PC gaming will continue to deteriorate. I love my PC-only games. I think most games being developed today actually play better with a mouse and keyboard rather than two awkward analog sticks under-thumb. And while I've really loved playing WoW for the last few weeks, games like Crysis are simply making me miserable. By all accounts, I have plenty of hardware to play Crysis, but I still get Vista core dumps on a regular basis. As a result I'm rebuilding my awful NVRAID mirror on a regular basis (I'll never buy an NVidia RAID solution again, that monster of BIOS + Vista + Software RAID has cost me more time and pain than a decent hardware RAID solution ever would). Crysis is the best FPS I've ever played. It's immersive, well-designed and capable of making even the most modern GPUs sweat. None of that scares me, it's Vista's instability in 64-bit gaming and the resulting constant recovery exercise I'm forced to do that will push me closer and closer to purchasing console-only games.