Four ways of using a mobile phone.

I’m starting to conclude there are four ways a user will interact with a phone. These four modes are:

(1) Checking incoming messages or current device status. This should be very quick and unobtrusive: you should be able to just press a button or two and get the status you want.

(2) Sending a message. This should also be relatively quick, though often people are willing to spend time composing messages, such as SMS messages or quick e-mail messages. And in this category I’d put chatting away on the phone: you’re involved with the device communicating to someone else.

(3) Searching for something. This may wind up being involved, but ideally this should be somewhat location-based for some types of searches.

(4) Playing games. You’re sitting around for a few minutes killing time, and you want a way to pass that time.

Some things that sound like they should fall into one category really fall into another: surfing the ’net on the iPhone, for example, strikes me as an activity similar to playing games–a way to kill time. Other activities may fall into multiple categories: for example, if you search for a movie theater to see what’s playing, are further follow-ups to that theater a ‘search’ activity or a ‘status’ activity? (In other words, would you want to revisit the movie theater to see what’s playing on a different day, or do you want to be alerted 20 minutes before a movie at that theater starts so you remember to wander back to catch it?)

I think one of the mistakes people are making with respect to the phone is that they are treating it as a desktop computer, rather than as a sophisticated communications device. More than once I’ve seen people try to suggest the iPhone become an ‘immersive’ environment. How immersive can a 3.5″ diagonal screen be?

Software Development Tools

Many years ago I got a DSL connection with a fixed IP address. I paid a fortune per month to run that fixed IP address, but the nice thing was being able to put my server on-line so I could run my own e-mail server, my own web server, and my own source control system.

A couple of months ago I switched to a new ISP which was cheaper–but I gave up the fixed IP addresses in the process.

On the other hand, there is no need for having my own IP address when for less money I can sign up for the following services:

Google Apps: Free e-mail and shared documents.

SVN Hosting: I am using the $7/month level service at SVNRepository.com; this gives me theoretically unlimited repositories and users and a 2gig storage cap.

Web Hosting: I’m using the $7/month level basic service at LunarPages.com; I can park all of my different web sites on the same account (and even host different page collections from one account), and for an extra $2/month they’ll also host my JSP pages.

So I’m out $16/month on the above services–but the fixed IP address was running me $30/month, and it required me to maintain a separate computer and to constantly update and maintain the software on it. I don’t know how secure SVNRepository is: if I were running something mission critical or creating software worth hundreds of millions of dollars I’d probably run my own SVN server. But for synchronizing my code across three different computers and providing the ability to share code bases with other people, it’s pretty damned cool.

And so a new chapter begins for our plucky hero.

The application checklist for applying to graduate school includes the following:

  1. A completed application form.
  2. A Statement of Purpose.
  3. At least 3 letters of recommendation.
  4. Additional information, such as abstracts, resume, list of publications, etc. Essentially a CV or the work equivalent.
  5. GRE test scores.

I’ve completed #5 in the list above. I have until December 15th (realistically, December 1) to complete the rest. (And I got an 800/610–respectable, though I thought I could do better on the verbal. But in a test environment, you choke–and I choked hard. Assuming the essay portion of the test went well, I think it’s good enough.)

If all goes well, then by next spring I should hear from one of the three schools I’m applying to: Caltech, USC or UCLA. And if this all works out as I hope, then I will be entering a Ph.D. program in Computer Science.

Why a Computer Science Ph.D.? Because I promised myself 20 years ago when I left college that someday I would go back and get a higher degree. And now is the time to complete that promise.

Productivity Killers.

Cube Farms. The research that has been used to justify putting programmers into ‘veal-fattening pens’ is flawed; most of that research was done using graduate students or undergraduates attempting to solve a common problem. The thing is, if you have two or three people who are trying to solve a problem that none of them have ever seen before, cooperation is usually better than isolation–but if you have one person who is on a roll writing code and doesn’t have any real unknowns (other than the specific problem he’s writing code for), then interruptions are bad. (If interruptions were not bad, then they’d rig bells that ring every five minutes.)

If you are good at what you do, you do not need to be pestered every five minutes.

Underpowered machines. I cannot believe in this day and age that a 40 year old software developer making $100K+ a year (where “+” is a fairly large “plus”) writing Java code would be given a three-year old underpowered computer that cost only $600 new. But that’s where I am–and we’re not talking about some dumb-shit hole in the wall somewhere–we’re talking Ya-friggin’-hoo!, for Christ’s sake!

A Pentium 4 at 2.6GHz with 512meg of RAM and a 40gig drive with a 17-inch monitor is simply unacceptable as a development platform–especially with a model that steals 32meg of that RAM for video–yet here I am. And all requests to IT have so far been met with stonewalling and silence.

Fucking nuts.

Pruning Non-Determinacy versus Thinking Ahead.

Oh, look; now we have multiple cores inside modern day computers. Today you can get a laptop with two processor cores on one chip for cheap, and soon Apple will be releasing a computer with 8 processor cores spread across two microprocessor chips. Noticing that today’s workaround for keeping Moore’s Law ahead of physical reality is to stuff more processor cores onto a chip–which requires multithreading and multitasking to take full advantage of it–several articles have appeared which attempt to illustrate the pitfalls of multithreaded programming.

Except, well, do they?

Exhibit A: The Problem with Threads, which attempts to illustrate the problem with threads by discussing the Observer Pattern:

Consider the observer pattern, a very simple and widely used design pattern. Figure 1 shows a Java implementation valid for a single thread. This shows two methods from a class where an invocation of the setValue() method triggers notification of the new value by calling the valueChanged() method of any objects that have been registered by a call to addListener().

The code in Figure 1 is not thread safe, however. That is, if multiple threads can call setValue() or addListener(), the listeners list could be modified while the iterator is iterating through the list, triggering an exception that will likely terminate the program.

To reiterate, because I don’t feel like copying his code across, the observer pattern example given is simple: someone calls “setValue”, and this sets the value for the object as well as tells everyone else about the value that was just set.

Then we go on for several paragraphs talking about why the code is not thread safe, to which all I can really add here is, well, no fucking shit, Sherlock, it isn’t thread safe.

Frankly, not only is it not thread safe, but the solution doesn’t even make any sense in a multi-threaded environment!

What are you trying to do here? Well, when someone comes along and changes the value, you’d like to tell everyone who cares what that value changed to. Simple enough, right? But in a multi-threaded environment, what does it mean when thread 1 changes the value when thread 2 also wants to change the value? Are the objects who want to be notified thread safe? What does it mean when a central value is changed twice by two threads? What is the semantics here?

In a multi-threaded environment, assuming the Observer Pattern should somehow be made to work overlooks a rather obvious question: what does it mean for two threads to change the same value? And the corollary: when we are notified do we just need to be told that something changed so we can refresh something–like a UI display widget–and so we just need to know the last time it changed? Or do we need to know every value transition in the order in which they occurred, because we’re driving a state machine?

In other words, by focusing upon the design model to proclaim that Threading is hard, we’ve avoided the question of what the hell it is we’re doing. And if we could answer the question “what are we trying to accomplish here,” perhaps we could then solve the problem by using an appropriate design pattern rather than trying to extend a design pattern where it doesn’t belong.

So, say instead the problem is that you just want to be notified the last time something changed, so you can update a user interface element. We know that Swing runs everything within its own UI thread–so our setValue routine will need to use the Swing method EventQueue.invokeLater() on a Runnable object which then sends out the notification to our listeners that the value has changed. Then, whenever the value changes, we can check whether we’ve already inserted an event into the Swing event queue that hasn’t fired yet. If we have, we’re done. If not, we create and insert a new one.

The implementation is straightforward:

import java.awt.EventQueue;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class ValueHolder
{
    private List<Listener> listeners = new LinkedList<Listener>();
    private int value;
    private boolean fWaitToFire = false;
    
    public interface Listener 
    {
        public void valueChanged(int newValue);
    }
    
    private class FireListeners implements Runnable
    {
        private List<Listener> copyOfListeners;

        FireListeners()
        {
            /* Only constructed from within the synchronized setValue(), */
            /* so copying the listener list here is thread safe. */
            copyOfListeners = new LinkedList<Listener>(listeners);
        }
        public void run()
        {
            fireListeners(copyOfListeners);
        }
    }
    
    public synchronized void addListener(Listener listener)
    {
        listeners.add(listener);
    }
    
    public synchronized void setValue(int newValue)
    {
        value = newValue;
        if (!fWaitToFire) {
            EventQueue.invokeLater(new FireListeners());
            fWaitToFire = true;
        }
    }
    
    private void fireListeners(List<Listener> list)
    {
        int localValue;
        
        synchronized(this) {
            /* Grab value and reset wait to fire. If someone else changes me while */
            /* I'm iterating the values, that's okay; I'll just get fired again later. */
            /* Since we're invoked from the FireListeners internal class, we're already */
            /* in the main Swing thread, so there are no multi-threaded issues with */
            /* the list iterator here. */
            localValue = value;
            fWaitToFire = false;
        }
        
        Iterator<Listener> it = list.iterator();
        while (it.hasNext()) {
            it.next().valueChanged(localValue);
        }
    }
}

Notice the two principles I’ve used here: first, keep the amount of code in the synchronized blocks as short as possible. That way, the time spent holding the lock is kept to a minimum, and parallelism is maximized. Second, I’ve used a monitor flag, ‘fWaitToFire’, to determine if we already have an object queued up in the Swing thread that hasn’t fired yet. This permits me to minimize the number of times we carry out the expensive operation of copying my listener list. Since we’re only interested in knowing that the value changed, not in being notified of every change, this may notify us only once if the value is changed twice in a row.

Suppose, however, that we need to be notified every time the value changes, and in the order in which it changed. Then we need to implement a thread queue: an object which stores not just the current value, but a queue of all the values that have been assigned to it. A thread owned by that object then runs in the background, peeling entries off the queue and sending them out to the listener in order.

Because it’s late, I will leave the implementation of a thread queue to the reader.
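For the curious, here is one minimal sketch of what such a thread queue might look like. The names (ValueQueue, Listener) are my own, invented for illustration, and a production version would want proper shutdown handling:

```java
import java.util.LinkedList;

/* A minimal "thread queue": every value set is delivered to the listener, */
/* in order, by a single background thread. */
public class ValueQueue
{
    public interface Listener
    {
        public void valueChanged(int newValue);
    }

    private final LinkedList<Integer> queue = new LinkedList<Integer>();
    private final Listener listener;
    private final Thread worker;

    public ValueQueue(Listener l)
    {
        listener = l;
        worker = new Thread(new Runnable() {
            public void run()
            {
                try {
                    for (;;) {
                        int v;
                        synchronized (queue) {
                            /* Sleep until setValue() queues something. */
                            while (queue.isEmpty())
                                queue.wait();
                            v = queue.removeFirst();
                        }
                        /* Deliver outside the lock, so a slow listener */
                        /* never blocks callers of setValue(). */
                        listener.valueChanged(v);
                    }
                } catch (InterruptedException e) {
                    /* Asked to shut down; fall out of the loop. */
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public void setValue(int newValue)
    {
        synchronized (queue) {
            queue.addLast(newValue);
            queue.notify();
        }
    }
}
```

Note the same principle as before: the synchronized blocks only touch the queue itself, and the (potentially slow) listener callbacks happen with no lock held.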

There is one thing in this article I do agree with wholeheartedly:

I conjecture that most multithreaded general-purpose applications are so full of concurrency bugs that—as multicore architectures become commonplace—these bugs will begin to show up as system failures.

Absolutely.

And if the current thinking shown in the above linked article prevails–of applying inappropriate design models to a multi-threaded environment without giving a single thought as to what the original problem was–then we’re not going to find any good solutions anytime soon.

Because ultimately the goal seems to be tackling the wrong problem. The problem is not

…[adding] mechanisms [to] enrich the programmer’s toolkit for pruning nondeterminacy.

No, the goal should be to think about what the hell it is you’re trying to accomplish in the first place, and using the correct design model so as not to create non-determinacy in the first place.

On that note I’ll leave you with two thoughts. First, there is the article Thread pools and work queues; the alternate mechanism I outlined above for sending out every value change as it happens is really just a work queue with a thread pool of one thread.

And the second is not to fear thread deadlocks–just remember the following rule: if you always lock in order and unlock in reverse order, you’ll be fine. That is, if you have three locks, numbered 1 through 3, so long as you always write your code so that lock A is taken before lock B whenever n(A) < n(B), then you will never deadlock. That’s because deadlocks occur when thread A locks 1 then 2 while thread B locks 2 then 1: thread A holds 1 and waits for 2 to come free, while thread B holds 2 and waits for 1 to come free. But if you always lock in order, you will never have code that deadlocks, because you will never have thread B’s code in the first place: it locks out of order.

Sometimes that can be damned hard: it can involve rethinking how some code is written, locking locks early, for example, or even adding what seems like unnecessary thread work queues or other stuff. But if you follow the rule above you will never deadlock.
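The rule is easy to show in code. Here’s a contrived sketch (the class and method names are my own) where both methods need both locks; because each takes lockA before lockB, two threads running them concurrently can never deadlock:

```java
/* A sketch of the lock-ordering rule: every code path acquires the */
/* locks in the same global order (lockA, then lockB), so no cycle of */
/* threads waiting on each other can ever form. */
public class Transfer
{
    private final Object lockA = new Object();   /* lock number 1 */
    private final Object lockB = new Object();   /* lock number 2 */
    private int a = 100;
    private int b = 100;

    public void moveAtoB(int amount)
    {
        synchronized (lockA) {          /* lock 1 first... */
            synchronized (lockB) {      /* ...then lock 2 */
                a -= amount;
                b += amount;
            }
        }
    }

    public void moveBtoA(int amount)
    {
        /* Even though this moves value the other way, it still takes */
        /* the locks in the same order: lockA, then lockB. Taking */
        /* lockB first here is exactly the out-of-order pattern that */
        /* deadlocks. */
        synchronized (lockA) {
            synchronized (lockB) {
                b -= amount;
                a += amount;
            }
        }
    }

    public int total()
    {
        synchronized (lockA) {
            synchronized (lockB) {
                return a + b;
            }
        }
    }
}
```

Swap the two synchronized statements in moveBtoA and two threads hammering on both methods will eventually wedge; as written, they never will.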

And provably correct code wins over “pruning nondeterminacy” every day of the week and twice on Sundays.

With this in mind, you can imagine the constant mental anguish that was going on inside of my head as I read the following article, submitted into evidence as Exhibit B: Barrier. I mean–deliberately trying to create non-determinacy in order to demonstrate pruning techniques? Why not just give some thought as to appropriate multi-threaded design models and techniques for proper design?

New section in Wiki

I’ve added a new section to the Wiki to store articles I put together to track how to do something. This new section, called “technical notes”, will contain notes about how to string technologies together to make them work.

The first article is an overview of the work I’ve done so far to understand Java’s Drag and Drop.

Style Guides

A question came up at work: I had rewritten a bunch of code, and the fellow I’m working with was unfamiliar with my programming style guide. (Why do all your structure fields start with a lower-case ‘f’? And why do globals start with ‘g’?) His programming style came from the Windows Hungarian Notation world, so it’s not like this was a really dumb question. (Why are all of your fields named like they’re floating point numbers?)

But I cut my eye-teeth on C++ way back in 1990–and the programming style I adopted way back then came from Apple’s unofficial C++ style guide.

After 17 years bad habits are hard to break. 🙂