I'm wondering how glTexImage2D would work on a system with shared memory. It kinda seems like it would just copy the data into another location and maybe do a format conversion... That would make blitting an image slow compared to glDrawPixels.
That is essentially what happens, shared memory or not. A copy is required by the API, so you can release the application-side memory immediately after the glTexImage2D call. The implementation may also convert the data to a format that is more efficient for the GPU. You are right that glTexImage2D is not the ideal solution for blitting an image once; that's not what it's designed for.
I'm also wondering if GL_ALPHA can be expanded to RGBA in the fragment shader instead of at upload time. You would only need to store alpha data for a font, which the fragment shader could then color.
Of course. I don't think any existing OpenGL ES implementation expands alpha textures to RGBA in memory.
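A minimal GLES 2.0 fragment shader along those lines might look like this (the uniform and varying names are made up for the example; sampling a GL_ALPHA texture yields (0, 0, 0, a)):

```glsl
precision mediump float;

uniform sampler2D u_glyph;   // GL_ALPHA font texture: only alpha is stored
uniform vec4 u_color;        // text color chosen by the application
varying vec2 v_texcoord;

void main() {
    float a = texture2D(u_glyph, v_texcoord).a;       // coverage from the font atlas
    gl_FragColor = vec4(u_color.rgb, u_color.a * a);  // tint the glyph at draw time
}
```

The texture stays one byte per texel in memory; the expansion to RGBA happens per fragment, for free, as part of shading.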
Also, will there be any interface to the 2D hardware? And can you combine 2D blitting with 3D without problems with vsync, double buffering, etc.?
I'm not sure what kind of interface you are referring to here. Combining 2D and 3D is often tricky, but it depends on the device. In some cases it's possible to use overlays to draw 2D and 3D independently and only combine them in the display controller.