The things in Android that keep tripping me up.

android:layout_weight

When building a layout, the biggest question that keeps going through my mind is “how do I get this control to lay itself out so it consumes only the space left over in a linear layout flow?”

And the answer to that is android:layout_weight.

If you want one control in a LinearLayout to land at the bottom of the screen with a fixed height, give the other control a height of “match_parent” (or “0dp”; with a weight set, either works, since the weight recomputes the dimension) and a weight of 1. This causes it to consume the rest of the space. (Bonus: you can split the view among multiple controls by giving them different weights, and with the right weights you can even have one control take a third of the space and another take the other two thirds.)
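In code rather than XML, the same trick looks roughly like the following. This is a minimal sketch; the class and view names are mine:

```java
import android.content.Context;
import android.view.View;
import android.widget.LinearLayout;

public class WeightDemo {
    // Builds a vertical LinearLayout where "content" soaks up all the
    // space not claimed by the fixed-height bar beneath it.
    public static LinearLayout build(Context context, View content, View bar) {
        LinearLayout layout = new LinearLayout(context);
        layout.setOrientation(LinearLayout.VERTICAL);

        // Height 0 plus weight 1 means "take whatever is left over."
        layout.addView(content, new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT, 0, 1.0f));

        // The bar keeps its fixed 48-pixel height; no weight.
        layout.addView(bar, new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT, 48));
        return layout;
    }
}
```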

android:gravity

It’s the other one I keep forgetting about. It lets you center something, flush it to the right, or whatever. One subtlety worth remembering: android:gravity positions a view’s contents within the view itself, while android:layout_gravity positions the view within its parent container.
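A minimal sketch of the distinction in Java, with names of my own choosing:

```java
import android.content.Context;
import android.view.Gravity;
import android.widget.FrameLayout;
import android.widget.TextView;

public class GravityDemo {
    // gravity vs. layout_gravity, expressed in code rather than XML.
    public static FrameLayout build(Context context) {
        FrameLayout parent = new FrameLayout(context);

        TextView label = new TextView(context);
        label.setText("Hello");
        // android:gravity -- positions the text *inside* the TextView.
        label.setGravity(Gravity.CENTER);

        // android:layout_gravity -- positions the TextView inside its parent.
        FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.WRAP_CONTENT,
                FrameLayout.LayoutParams.WRAP_CONTENT);
        lp.gravity = Gravity.CENTER_HORIZONTAL | Gravity.BOTTOM;
        parent.addView(label, lp);
        return parent;
    }
}
```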

I also have some code lying around which helps host multiple activities where a single activity would normally live, cobbled together by reading this post; it goes into how to extend an ActivityGroup to achieve multiple activities within the same tab group item. I think the principle can be extended to support other interesting effects, such as a list view where each row in the list is its own activity. But that’s something I need to plug away at to see if I can make it work.
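For reference, the core of the ActivityGroup trick looks something like the sketch below. ContainerActivity and ChildActivity are hypothetical names of mine; the LocalActivityManager calls are the actual Android API:

```java
import android.app.ActivityGroup;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.view.Window;
import android.widget.FrameLayout;

// A rough sketch: embed a child activity's view inside another activity's
// layout instead of showing it as a separate screen. ChildActivity is a
// hypothetical activity of your own.
public class ContainerActivity extends ActivityGroup {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        FrameLayout container = new FrameLayout(this);
        setContentView(container);

        // Start the child activity, then steal its decor view and host it.
        Intent intent = new Intent(this, ChildActivity.class);
        Window childWindow = getLocalActivityManager()
                .startActivity("child", intent);
        View childView = childWindow.getDecorView();
        container.addView(childView);
    }
}
```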

And then Apple changes the Rules. Again.

So I uploaded J2OC, and had lost interest in it. After all, who needs a second “let’s recompile Java into Objective C” tool for building iPhone and Android applications if Apple isn’t going to allow it?

Then Apple does this: Statement by Apple on App Store Review Guidelines

In particular, we are relaxing all restrictions on the development tools used to create iOS apps, as long as the resulting apps do not download any code. This should give developers the flexibility they want, while preserving the security we need.

What I would ideally want is a Java VM kernel that can be linked into an iPhone application, one capable of running a jar file. Because ideally I’d like to write model code in Java–so I can port that model code to Android. Yet I don’t want UI bindings into the Apple API–I’d rather just build the UI twice, while the (more complicated) model code remains the same.

Thank you Apple. Maybe I’ll document J2OC better and provide some sample programs. It really is a cool little bit of technology. 🙂

Something Funny Happened To Me On The Way To Release.

So I started playing with parsing Java class files, creating a cross compiler capable of converting Java class files into Objective C files. I even had a sufficient amount of Apache Harmony running that I could use a good part of the java.lang and java.util classes; roughly on par with the GWT cross compiler, which translates Java into Javascript.

Then Apple dropped the “no cross compiling” bombshell.

Now, keep in mind that I’m just me, tinkering in my spare time on weekends. I don’t have the desire or the time to go up against Apple. I’d rather let the XMLVM project (which has a well established ecosystem, or so it seems) decide whether to go against Apple’s wishes.

Then time went by, and I sort of lost interest in this thing.

So I’ve taken the liberty of posting the source code here: the Java to Objective C Compiler sources, and the J2OC RTL, which contains a subset of the Apache Harmony project implementing the java.lang and java.util classes.

It’s been an interesting project, and hopefully in the next few weeks I’ll document how this all works–including the weirdnesses and pitfalls I came across in the Java VM while getting Apache Harmony to work. (Nothing like working through a very large collection of class files to find all the fringe cases.) The output code was intended to be human readable–but for some expressions it really isn’t.

But I’ll describe that in the next few weeks.

And at some point I’ll post an example iPhone application which includes Java code.

Note that my approach was different from the XMLVM project’s. Instead of providing Java bindings for the iOS libraries, my intent was to allow only the compilation of a computational kernel, with the user providing the UI elements separately for Android, the iPhone, the iPad, and whatever other target the code was compiled for.

So you won’t find a turn-key solution here for recompiling Android code and having it run on the iPhone. For that, you should really check out the XMLVM project instead.

All this code, by the way, is being published under a BSD-style license: go ahead and use the code, but leave me out of it and don’t blame me if it goes haywire.


While I don’t intend to get into the full workings of the compiler, I will give a taste of how the code works. The bulk of the .class file parser, which reads and loads the .class file data into memory, is contained in the class ClassFile in com.chaosinmotion.j2oc.vm. Its constructor takes an input stream opened to the first byte of a .class file and loads the entire class file into memory.
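To give a flavor of what that parsing involves, here is the first thing any .class parser (including ClassFile’s constructor) has to do, per the JVM specification. This is an illustration of mine, not the J2OC source itself:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Every .class file starts with the magic number 0xCAFEBABE, followed by
// the minor and major version numbers; the constant pool, access flags,
// fields, methods, and attributes follow after that.
public class ClassHeader {
    public final int minorVersion;
    public final int majorVersion;

    public ClassHeader(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int magic = data.readInt();
        if (magic != 0xCAFEBABE) {
            throw new IOException("Not a .class file");
        }
        minorVersion = data.readUnsignedShort();
        majorVersion = data.readUnsignedShort();
    }
}
```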

Once read, the entire class file can be accessed using the getters on that class. The bulk of the code inside the .vm package (and its subpackages) represents the contents of the class file: the .vm.data classes contain the data types used to store the metadata within a class file (method names, attributes, fields, and the like), and the .vm.code classes contain a code parser that converts the code within the .class file into an array of processed instructions.

Once the instructions are parsed (by the vm.code.Code class), the code in a method is represented as an array of code segments. Each segment is a run of instructions that starts with an instruction some other instruction jumps to, and terminates either at the end of the method or with a jump instruction. In other words, a CodeSeg (the Code.CodeSeg class) is a section of instructions that is always entered at the first instruction and executes sequentially through to the last instruction in the segment. Additional information is noted as well, such as the list of variables in use when the segment is entered; this is the state of the Java operand stack at the moment the segment is entered.
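A CodeSeg, in other words, is essentially what compiler folks call a basic block. A sketch of the idea, using hypothetical names rather than the actual J2OC classes:

```java
import java.util.List;

// A sketch of the idea behind Code.CodeSeg (not the actual class): a basic
// block -- a run of instructions always entered at the first one -- plus
// the state of the Java operand stack on entry.
public class BasicBlock {
    /** Placeholder for a decoded JVM instruction. */
    public interface Instruction { int opcode(); }

    public final int startPc;               // bytecode address of the entry point
    public final List<Instruction> body;    // runs straight through to the end
    public final List<String> stackOnEntry; // operand-stack types when entered

    public BasicBlock(int startPc, List<Instruction> body,
            List<String> stackOnEntry) {
        this.startPc = startPc;
        this.body = body;
        this.stackOnEntry = stackOnEntry;
    }
}
```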

Ultimately the code parser and class file reader represent the code in a .class file in an intermediate, in-memory form that can then be used to write Objective C with the WriteOCMethod class (in com.chaosinmotion.j2oc.oc). A second class, CodeOptimize (also in the .oc package), provides utilities that determine whether code preambles must be written for memory management or for exception handling: the memory management preamble can be skipped if the method never invokes another method. (This is the case for simple functions that return a field or do simple math.)
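Reusing the BasicBlock sketch above, here is a guess at the shape of that test; the actual CodeOptimize code may well differ. The opcode range is from the JVM specification (invokevirtual through invokeinterface, 0xB6 through 0xB9):

```java
// A method needs the memory-management preamble only if it can call
// another method; a simple getter or bit of arithmetic can skip it.
public class PreambleCheck {
    public static boolean needsMemoryPreamble(Iterable<BasicBlock> blocks) {
        for (BasicBlock block : blocks) {
            for (BasicBlock.Instruction insn : block.body) {
                int op = insn.opcode() & 0xFF;
                if (op >= 0xB6 && op <= 0xB9) {
                    return true;    // invokes another method
                }
            }
        }
        return false;
    }
}
```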

In theory, it should be possible to replace the code writer with one capable of writing a different language, such as C++ or C.


In the future, when I have more time, I’ll write more about the J2OC project. But for now, if there are any segments or parts you want to use or play with, be my guest.

Annoyed.

On Android, it appears android.graphics.Region (which is used internally for clipping against a Path) is not anti-aliased. This means that if you set a clipping path that is not rectangular, such as a rounded rectangle, the edge of the clipped interior will have jaggies.

My workaround for rounded-rectangle clipping with a border is to draw the border after drawing the interior, making the stroke wide enough to cover the jaggies. But IMHO it’s a kludge.
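In Canvas code the workaround looks roughly like this (a sketch; the class name and colors are mine):

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.RectF;

// Fill inside a (jaggy) clip path, then hide the jaggies under an
// anti-aliased border stroke drawn over the clip edge.
public class RoundRectDrawer {
    public static void draw(Canvas canvas, RectF bounds, float radius) {
        Path path = new Path();
        path.addRoundRect(bounds, radius, radius, Path.Direction.CW);

        // Region-based clipping is not anti-aliased, so this edge is rough.
        canvas.save();
        canvas.clipPath(path);
        canvas.drawColor(Color.WHITE);          // the interior
        canvas.restore();

        // A wide, anti-aliased stroke covers the rough clip edge.
        Paint border = new Paint(Paint.ANTI_ALIAS_FLAG);
        border.setStyle(Paint.Style.STROKE);
        border.setStrokeWidth(3);               // wide enough to hide jaggies
        border.setColor(Color.BLACK);
        canvas.drawPath(path, border);
    }
}
```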

Why I hate custom protocols over HTTP.

One recent trend is to use HTTP to send data between a client and a server. Between protocols built on top of SOAP and XML/RPC (and yes, I’ve built code on top of XML/RPC, and have a Java XML/RPC library), it’s not at all uncommon to send text commands over HTTP.

And it makes sense: HTTP generally is not blocked by various internet providers while other ports are firewalled, and it is well supported across the ‘net.

As a rule, however, I’m generally opposed to overriding an existing protocol for private use. My instincts are if it is possible for me to open a port from a client to the server that is not in use by an existing protocol, then use that port instead.

With HTTP, there are a number of downsides. HTTP is essentially a polling protocol: ask a question, wait for an answer, get an answer. A lot of plumbing has gone into HTTP to work around the resulting performance issues–but because it is essentially a polling protocol, there is little you can do to get past a resource that takes a long time to download besides opening a second connection. (Protocols like LDAP, by contrast, allow multiple logical connections over the same physical TCP socket.)

HTTP has also become somewhat more complicated over the years, with things like optional keep-alive settings and an array of possible return codes. All of this makes sense if you’re building a web browser (though some of it is a bit over-engineered: 418 “I’m a teapot” is an April Fools’ joke from RFC 2324, which is more than can be said for things like 449 “Retry With”), but for a simple RPC protocol we really don’t need more than “success”, “failure”, and “exception.”

And today I learned another thing that just confirms my “don’t override someone else’s protocol; just build your own” instinct.

As designed, the client I’m working on initializes a connection by requesting information about static resources that may have changed. So I do an “init” call and wait for a response. As part of the request, the server team specified that I should send “If-Modified-Since” with the date of the last response, so I can tell whether I should update the cached response. (This was modified from the original idea, which was to simply use an integer version number.) This client runs on Android, both over WiFi and over the cell network.

You can guess what happened next.

Yes, T-Mobile rolled out a new proxy server to reduce 3G network traffic by automatically detecting and caching server responses–and it answered the init call with a 304 “Not Modified” response. Well, if you send “If-Modified-Since”, you’d better handle 304 responses, right?

My client didn’t.

And so the client software that 130,000 people were running–died. Hard.

The first time you ran the application it would sync up just fine. But the next time you connected, T-Mobile’s proxy would decide the response hadn’t changed and send a 304 response–which the client did not understand, so it eventually shut down, claiming it could not connect to the server.

And we never tested this. Of course we never tested this: our server never sent a 304, so we never had a way to test it. In retrospect, of course, “everyone knows” that if you send If-Modified-Since, you should handle the 304 response.

The fix was simple, as all such things tend to be once they are discovered and understood.
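For the record, the shape of the fix looked roughly like this (a sketch using HttpURLConnection; the Cache interface and URL handling are stand-ins, not our actual client code):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// If you send If-Modified-Since, anything along the way -- your server
// or a carrier proxy -- may answer 304, so handle it.
public class InitCall {
    public static InputStream fetch(String url, long lastModified,
            Cache cache) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
        conn.setIfModifiedSince(lastModified);

        int code = conn.getResponseCode();
        if (code == HttpURLConnection.HTTP_NOT_MODIFIED) {  // 304
            return cache.cachedResponse();  // nothing changed; use the cache
        }
        if (code == HttpURLConnection.HTTP_OK) {
            return conn.getInputStream();   // fresh data; update the cache
        }
        throw new Exception("Unexpected HTTP response: " + code);
    }

    /** Hypothetical cache of the last successful response. */
    public interface Cache { InputStream cachedResponse(); }
}
```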

But it would never have happened if we hadn’t overridden the HTTP protocol–a protocol with layers we don’t fully understand (until they break), running on a network that can insert any ol’ proxy (some with bugs we may never understand) between the client and the server.

Why we shouldn’t unify mobile and desktop UI frameworks.

John Gruber: Google’s Microsoft Moment

It makes no sense to me why Chrome OS isn’t based on Android. Maybe there’s a good answer to this, but Google hasn’t given it.

While I don’t understand why Google has two completely separate operating systems, one based on Dalvik (a Java VM work-alike) and the other based on Javascript (sharing more in common with the Palm WebOS platform than with Android), I do know why no one will ever be able to successfully create a framework that unifies mobile and desktop operating systems. It’s why Apple has separate MacOS and iPhone OS UI frameworks.

It’s because of the needs of each platform.

Bottom line is this: in a desktop application you can create a rich window and environment that displays everything at once. The best example of this is the Apple Mail application: in one window I see all of my mailboxes, the status of each of them, the list of mail in the selected mailbox, and the currently selected message.

Mobile applications, on the other hand, have limited real estate to work with. Thus, instead of a single window, the iPhone version of Apple Mail has separate screens for showing accounts, the mailboxes in an account, the messages in a mailbox, and an individual message. A mobile application also has the notion of a “view stack”: modal views pushed onto and popped off a stack–something that has no counterpart in the desktop world.

The development model also differs: on the desktop I may have one model and one controller object (from the MVC design pattern) driving multiple views simultaneously. But on the phone I need multiple controllers attached to the same model, each controller carrying information about what was selected higher up in the view stack.
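A toy sketch of the difference; all the names here are illustrative:

```java
import java.util.Arrays;
import java.util.List;

// One model shared by several controllers, where each controller
// remembers what was selected higher up in the view stack.
public class StackedControllers {
    /** The shared model; on the desktop one controller could drive it all. */
    static class MailModel {
        List<String> accounts() { return Arrays.asList("Home", "Work"); }
        List<String> mailboxes(String account) {
            return Arrays.asList("Inbox", "Sent");
        }
    }

    /** One controller per screen; the selection flows down the stack. */
    static class MailboxListController {
        private final MailModel model;
        private final String selectedAccount;  // chosen on the previous screen

        MailboxListController(MailModel model, String selectedAccount) {
            this.model = model;
            this.selectedAccount = selectedAccount;
        }

        List<String> rows() { return model.mailboxes(selectedAccount); }
    }
}
```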

Knowing that, we can answer the question that John Gruber links: “If I make a screen two inches smaller, should I use Android instead of Chrome OS? If the keyboard works with my fingers instead of my thumbs, I should use Chrome OS and not Android?”

The answer is simple: is the screen so small that your application must be represented as a stack of views (like the iPhone Mail application), or can everything relevant be placed into a single window (like the desktop Apple Mail application)? If the former, use the mobile version of the operating system; if the latter, use the desktop version.

Touch screen dead zone.

While chasing down a usability bug, I discovered something interesting about the hardware for the Google developer phone, which I suspect also plagues the retail G1 and G2 phones. The problem is an issue with the touch screen technology used by HTC.

The bottom line is this: there is a border to the left and right of the screen (and, I suspect, at the top and bottom) where the finger’s location is not reported accurately. In a piece of test software, while dragging my finger from left to right, I found that when my finger was within 20 pixels of the left border the position was reported as 0, and within 20 pixels of the right border, as 319.0. This despite being able to see my finger’s location reported in one- and two-pixel increments elsewhere on the screen.
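The test software amounted to little more than a view that logs every touch position, something like this sketch (the class name is mine):

```java
import android.content.Context;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;

// Log every touch position and watch the reported X pin to 0 or 319
// inside the roughly 20-pixel border.
public class TouchProbeView extends View {
    public TouchProbeView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        Log.d("TouchProbe", "x=" + event.getX() + " y=" + event.getY());
        return true;    // keep receiving the rest of the drag
    }
}
```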

I’ve also found that finger-down events within that 20-pixel border are not reported unless accompanied by a drag outside of it. Thus, if you design a UI where the user is expected to tap within 20 pixels of the border, the tap event will not be detected on the current hardware. A tap and drag, however, will be detected–which is why a tap in the notification area at the top of the Android screen doesn’t necessarily work, but a tap and drag will reliably open the notification area.

Just something to keep in mind when designing for Android: a tap area near the left or right edge of the screen that is 40×40 pixels in size has the unfortunate problem that half of its potential touch area is dead to tap events.