I have ported XScreenSaver to the iPhone.

XScreenSaver 5.15 is out now.

Along with the usual set of minor improvements, this version runs as an iOS application. As all of the 3D modules in XScreenSaver are written against the OpenGL 1.3 specification, and as the iPhone only supports OpenGL ES 1.1 and newer, this was something of a big deal.

I accomplished this by implementing most of the OpenGL 1.3 API in terms of the OpenGL ES 1.1 API.

It's not in the app store yet. If you have Xcode, please build the "XScreenSaver-iOS" target and check it out in the simulator or on your own hardware. Please let me know how it works for you! (Update: It is in the app store now.)


I wrote this because you are all idiots.

Specifically, if you were involved in the OpenGL specification between 2003 and today, you are an idiot.

Allow me to explain.

Let's say you have a well-specified system that is in wide use (a language, a library API, whatever) and because of changes in some substrate (operating systems, hardware, whatever) you find that you need to add a new way of doing things to it.

The way you do this is, you add new features to the specification and you clearly document the version in which those features become supported.

If there are old features that you would like to discourage the use of, then you mark them as obsolete -- but you do not remove them because thou shalt not break working code.

If you don't agree with that, then please, get out of the software industry right now. Find another line of work. Please.

Your users have code that works. Maybe the new APIs would serve them better. Maybe things would be so much more efficient if they updated their code to use the new API. Or maybe it doesn't matter to them and they just want working code to continue to be working code. At least until such a time as they need the new features, or new efficiency. Remember the First Rule of Optimization: DON'T.

You may see where I'm going with this.

OpenGL was invented at SGI in 1992, and it served the world well for a decade. Generally the API worked like this: you'd position your lights and observer; you'd translate and rotate to where your object was to go; and then you'd say glBegin, specify your vertexes and normals, and then glEnd. The library would then take that polygon and render it. This was referred to as "immediate mode". To speed things up, you could also store these polygons in a list that could be replayed later, more efficiently.
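
For those who never used the old API, drawing a single triangle in immediate mode went something like this (a from-memory sketch of garden-variety OpenGL 1.x, not code from any particular saver):

    glTranslatef (x, y, z);        /* move to where the object goes */
    glBegin (GL_TRIANGLES);
    glNormal3f (0, 0, 1);          /* one normal for the face */
    glVertex3f ( 0,  1, 0);        /* three vertexes, one call each */
    glVertex3f (-1, -1, 0);
    glVertex3f ( 1, -1, 0);
    glEnd ();                      /* the library renders the polygon */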

So in 2003, OpenGL ES came along and deleted 80% of the language. They eliminated "immediate mode" in favor of a syntactically very different -- yet functionally equivalent -- way of doing things: instead of calling glBegin and glVertex, you are expected to put all of your vertexes and normals into an array and then call glDrawArrays on it, to draw it in one fell swoop. (This API already existed, but they made it be the only way to do it.)
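
The same triangle, ES style, looks something like this (again, just a sketch):

    static const GLfloat verts[]   = {  0,  1, 0,   -1, -1, 0,   1, -1, 0 };
    static const GLfloat normals[] = {  0,  0, 1,    0,  0, 1,   0,  0, 1 };
    glEnableClientState (GL_VERTEX_ARRAY);
    glEnableClientState (GL_NORMAL_ARRAY);
    glVertexPointer (3, GL_FLOAT, 0, verts);
    glNormalPointer (GL_FLOAT, 0, normals);
    glDrawArrays (GL_TRIANGLES, 0, 3);   /* all three vertexes in one call */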

People defend this decision by saying that they "had" to do it, because the fixed function pipeline is terribly inefficient on modern GPUs or some such nonsense. These people don't know what they're talking about, because the contour of the API has absolutely fuck-all to do with what goes over the wire.

"We had to destroy the village to save it."

Their claim seems to be that glBegin/glVertex had to be removed from the API, because to do otherwise would impact the performance of the whole system, by, I don't know, forcing GPU manufacturers to add new features to their chips or something.

This is nonsense, and I have an existence proof.

Because I've implemented the OpenGL 1.3 API in terms of the OpenGL ES 1.1 API, and it works fine. I didn't have to install a new GPU in my iPhone to do it.

I did it all by myself, in about three days.

Not me and my team. Not ten years of committees working on hundred-page specifications. Just me. Just to prove a point.

So screw you guys.

There is no sensible reason that something very like the code that I just wrote could not have been included in the OpenGL ES API and library. If people didn't use the old parts of the API, it just wouldn't be linked in. No harm. No bloat. That's how libraries work! But if someone did use it, their legacy code could continue to function. That's how supporting your customers works!

If they really felt the need to go all "Second System Syndrome" and just start over, they shouldn't have pretended that OpenGL ES is still OpenGL. They should have named it something else, like, I don't know, DirectX.


Technical details

To make this work, I wrote a version of the glBegin API that remembers your vertexes and then calls glDrawArrays at the end. Then to make display lists work, I wrapped each OpenGL function so that its calls could be recorded in a list. A side effect of this is that the generated glDrawArrays call is what gets stored in the list rather than the individual glVertex calls.
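
In outline, the wrapper does something like this (a drastically simplified sketch of the approach, not the actual jwzgles.c code, which also has to track normals, texture coordinates, colors and the rest):

    #include <stdlib.h>   /* realloc */

    static GLfloat *verts = NULL;   /* vertexes accumulated since glBegin */
    static int count = 0;
    static GLenum mode;

    void jwz_glBegin (GLenum m) { mode = m; count = 0; }

    void jwz_glVertex3f (GLfloat x, GLfloat y, GLfloat z)
    {
      verts = realloc (verts, (count + 1) * 3 * sizeof (*verts));
      verts[count*3 + 0] = x;
      verts[count*3 + 1] = y;
      verts[count*3 + 2] = z;
      count++;
    }

    void jwz_glEnd (void)
    {
      glEnableClientState (GL_VERTEX_ARRAY);
      glVertexPointer (3, GL_FLOAT, 0, verts);
      glDrawArrays (mode, 0, count);   /* replay the whole batch at once */
    }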

Then I implemented all the other crap that was missing too, like, "oh, we decided you don't need GL_QUADS, go rewrite your code to work with triangles instead." Jerks.
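
The GL_QUADS replacement, for example, boils down to splitting each quad along a diagonal (a sketch; emit_vertex is a hypothetical helper that appends one vertex to the accumulating array, and this assumes the quads are planar and convex):

    /* Quad (v0 v1 v2 v3) becomes triangles (v0 v1 v2) and (v0 v2 v3). */
    static void emit_quad (const GLfloat *v0, const GLfloat *v1,
                           const GLfloat *v2, const GLfloat *v3)
    {
      emit_vertex (v0); emit_vertex (v1); emit_vertex (v2);
      emit_vertex (v0); emit_vertex (v2); emit_vertex (v3);
    }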

The code is in hacks/glx/jwzgles.c. If you want to use it to port your own legacy code, just include jwzgles.h. Let me know if it works!

The timeline went something like this:

In early 2010, I thought about porting XScreenSaver to the iPhone. I spent an hour on it and discovered that iPhones don't support OpenGL 1.3. I did a lot of swearing, threw my hands up in disgust and walked away.

Then about six months later, I thought, "maybe I'll just update the code to use some subset of the OpenGL ES API that also works on 5-year-old desktop computers." It was hard to answer the question of, "what is that API?", because the OpenGL specifications are a nightmarish mess. I tried to answer the question, "Can I write a keyboard macro or Perl script that will munge my old code into a form that uses the new API?" The answer turned out to be, "hell no". So I threw my hands up in disgust and walked away.

Then about six months later, I thought, "Well, how hard could this be", and I spent a couple hours trying to generate a complete list of the OpenGL 1.3 functions that do not exist in OpenGL ES 1.1; and then the subset of those that are actually used by XScreenSaver. I made a header file. It was really long. I threw my hands up in disgust and walked away.

Then two weeks ago, I had a really bad week at work and needed a distraction, so I sat down and pounded out the code in three days.

As with all things, the first 90% took the first 90% of the time, and then the second 90% took the second 90% of the time.

I had the basics working right away -- I'd say 2/3rds of the OpenGL screen savers worked out of the box. A few more fiddly bits, like figuring out what parts of the texture API are no longer supported, took another few days.

The vast majority of the time was the next two weeks of dealing with the ancillary non-OpenGL-related stuff: building the iPhone user interface, and making sure all the savers reacted sensibly to orientation changes.

There are a few things I couldn't figure out how to implement:

  • Sphere-mapped textures for environmental reflection: OpenGL ES doesn't have glTexGeni GL_SPHERE_MAP, and I don't know how to fake it, so the Flying Toasters aren't shiny.

  • There's glTexImage1D and I'm not sure how to simulate that with glTexImage2D. (Update: oh duh, it's just an Nx1 2D texture; see the sketch after this list.)

  • There's no glPolygonMode with GL_LINE, so I don't see an easy way to implement wireframe objects with hidden surface removal. Maybe rendering them twice with glPolygonOffset?

  • Several of the hacks used GLUtesselator to decompose complex shapes into triangles, and I didn't implement that. I could probably port the code from GLU, but it's a huge piece of code and sounds like a pain in the ass, so I punted.
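
The glTexImage1D workaround mentioned above amounts to this (a sketch, not the exact jwzgles.c code):

    /* Fake a 1D texture as an Nx1 2D texture; you then sample it with
       texture coordinates (u, 0) instead of just (u). */
    void jwz_glTexImage1D (GLenum target, GLint level, GLint internalFormat,
                           GLsizei width, GLint border, GLenum format,
                           GLenum type, const void *data)
    {
      /* target is GL_TEXTURE_1D; quietly substitute GL_TEXTURE_2D. */
      glTexImage2D (GL_TEXTURE_2D, level, internalFormat, width, 1,
                    border, format, type, data);
    }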

Some iOS-related questions for the Lazyweb:

  • Is there any sane way to have the dialog that pops up when you hit "About" have a clickable URL in it? (My understanding is that putting a UIWebView there would not be considered sane.) Same question for the "Settings" pages: I'd like to have clickable URLs embedded in the free-text of the description field. (Update: I did it with UIWebView. It was an unconscionable hassle.)
  • Right now when a hack wants an image to display, it alternates between a screen-shot of the scrolling list view of the app, and colorbars. Is there any way to get a screenshot of the phone's "Finder" or whatever it's called: the page with all the application icons on it, that was visible before this app started? It also would be nice if we were able to load random images from the phone's Photo Gallery and use those. I gather that's possible with ALAssetsLibrary, but it sounds kind of like a pain in the ass. Can someone show me how I'd select a photo at random and get it back as a UIImage? (Update: David Phillip Oster has the goods!)

Anyway, there it is. I hope you enjoy it!


225 Responses:

  1. Kevin Lyda says:

    Just curious, have you considered putting this in github?

      • fncifc says:

        WHY DONT U JUST GO ALL ON OUT AND CALLS SU RETARD AND ASSWAD? BUT NO U HAVE TO CALL US IDIOT AND SCREW US US OVER? WHAT IS WRONG WITH U?

      • Cris says:

        Oh FFS. Pull your head out of your ass.

      • Jason says:

        Just to be nice! To be friendly. Because people like and use Github and it costs nothing to host open source code there. Because if your code was on Github I could click over and read it in seconds. Because it's currently the best way to communicate via code.

      • ddd says:

        What you did looks useful to the community at large, and conceptually independent from xss. So it would be a good starting point for a library designed to fully implement GL on top of ES. What's more, the community could then address the things you didn't complete, like tessellation, or GLU. Also, it would be interesting to extend the idea to other combinations, like: GL2.0 on top of "strict" GL4.0.

        Does that make sense? As far as I can tell, the license for the added code allows it.

        • DFB says:

          These things are called "thunks" and yes they are very useful.

        • jwz says:

          I wasn't being a smartass, it was a serious question. I never use github and only barely know what it is. I gather it's another web interface for a version control system. These are things I stopped noticing a decade ago. I have nothing against it, I just have no use for it. If someone wants to use the code, great. If they improve it and want me to incorporate their changes, that's what diff and email are for.

          I'm not criticizing. You like it, great. More power to you. I don't see how it's of any utility to me.

          • Jay Vaughan says:

            Well, we could use github to organize issues, manage pull requests, merge stuff from others.. and more importantly: fork.

            But of course, if you feel like manually integrating patches sent from random people by email, that's okay too. But github really makes this a lot more socially friendly and easier for all involved - including the main devs, which in this case would be you, jwz.

            I'd love to participate in xscreensavers on github, personally. I could immediately tackle the issues you've described if I knew it would be useful and administered into the main source repo without too much potential for jwz ire to be rankled.

            • jwz says:

              I do not find reading diffs in email to be a hardship, but I used to walk uphill both ways to school, so I dunno.

              • Jay Vaughan says:

                But, jwz, sharing code is not just about you. It's about the community of people you'd attract if you put it on github for sharing.

                • Rupert P. Fillywick says:

                  This github "community of people" is apparently more interested in pontificating about pointless differences in tools and wanking off to tool complexity than they are in actually getting shit done.

                • James says:

                  If you think it should be on github, why don't you check it in? {{sofixit}}

                  • Jay Vaughan says:

                    Good point - I guess I would, but it's jwz's project, and ought to be his decision. Well, he's made it, so ..

                  • Zygo says:

                    So about three years ago I scoured the Internet to collect every original xscreensaver tarball I could find and put them into a git repo. I don't have every release, and nothing like original commit messages, but I do have about 90 revisions from 1.17 to 5.17, give or take a few, as well as scripts to munge the tarballs into a linear revision history in case I have to insert a version I don't have between two older ones that I do. It's hard to find unmolested source archives prior to 1998 or so, and if anyone has versions I'm missing I'd like to complete the collection.

                    This came about because I liked xroger and glforestfire, and I figured if hacks were going to start disappearing without notice then I was going to start hoarding code like a code hoarder.

                    I think you can just "git clone https://github.com/Zygo/xscreensaver" (I know I can), but I just created my first ever Github account ten minutes ago so I'm not sure how all the incantations work.

                  • jwz says:

                    Wow, even I don't have any tarballs older than 1.21! Can you mail me those?

                  • Zygo says:

                    I haven't kept the original tarballs--the git repo containing 100+ distinct revisions is 20 times smaller than the tarballs used to generate it, and my web host is too tiny.

                    I'd like to say "well you can just get a snapshot from Github," but I've tried to do that for an hour now, and I'm now pretty sure that I can't.

                    If you go to http://git.hungrycats.org/cgi-bin/gitweb.cgi?p=xscreensaver;a=shortlog;h=refs/heads/master every link named "snapshot" will generate a tarball of that version from the git repo. I kept all the URLs where I found the tarballs in the revision history, so you can try your luck at grabbing those.

              • phuzz says:

                You were allowed to walk! Luxury!
                We had to walk on our hands both ways.

                Mind you, I do come from a long line of circus freaks.

                What's that? This is a thread about OpenGL? Sounds dull.

              • steamer25 says:

                > Why?

                Since you asked, git/GitHub is just a newfangled protocol that automates and (de facto) standardizes the sending and merging of patches. This makes certain things easier and therefore more likely to occur:

                * Patches can be seen by anyone--not just sender and recipient.
                * Patches can be cherry-picked from the command line (or GUI) in a consistent way--no need to copy/paste/tweak from this or that mail archive.
                * The tool has access to the full history of changes (even between forks) which makes it easier to resolve conflicts.
                * Maintains a central list of forks (which in turn democratizes which fork(s) the community is most interested in).
                * If the maintainer of a fork is unwilling/uninterested in accepting a patch, the tool can automatically re-apply the patch locally while merging in mainline updates.

                The bottom line is it makes people more likely to participate in contributing to the codebase.

                • steamer25 says:

                  ...and for those you'd rather not have contribute to the codebase--it gives them some easy outs so they feel less of a need to pester you.

          • ddd says:

            I wasn't being a smartass, it was a serious question.

            ... which is the reason why I gave a serious answer ;-)

          • Jaych says:

            This is a beautiful reply and Thank You jwz. I still don't trust git and all my own projects are still on SVN. I don't use branches nor believe in them. And the git hub web interface is crap. It is.
            Why adopt? I mean WHY ADOPT? Don't. Wait until something is mature before even thinking about using it. Sheesh, I don't even respect Ruby's right to exist, for much the same reasons.

            • If this is satire, it's excellent.

              Unfortunately, I get the feeling that it isn't. Sir, put the keyboard down and step away from the computer. Much like what happened with the dinosaurs, you are not adapting to your environment at a sufficient rate. Yes, I am calling you a dinosaur.

              • nooj says:

                hey; svn still works great for the things it worked great for before git. ie, single user, no branches content management--exactly the use case the poster is talking about!

                don't break his already working system. i think that's the entire point of this blog entry.

                • jwz says:

                  One of the features I really, really miss from LiveJournal is the ability to turn off comments on particular subthreads.

            • Jon says:

              SVN! I lolled. Thanks for that.

              • Zuvembi says:

                Eh, SVN gets the job done. It's far superior to some of the other VCS systems [1] that I have had foisted off on me. It's had most of the worst of the warts patched over. It's not as good as a modern DVCS, but it works 'okay'.

                [1] Starteam comes to mind.

          • Leonardo Herrera says:

            I would say that you could get more contributions (fixes, patches, etc.) than with your current way of managing it.

            Of course, github is just one of the many available sites that do this.

      • Jeffrey Paul says:

        Because Github makes it painless to read code in-browser (e.g. on iPad) for one.

        It also reduces the barrier to entry for people who want to contribute, as pull requests allow for public commentary and review, and allows people to track the current state of HEAD versus just presuming whatever tarball you've thrown up on your website is the latest and sending in patches against that (that are only ever seen/reviewed by you).

        Sometimes reading other people's code or proposed patches is a great educational tool, even if those proposed patches never actually make it in to the package.

        Seriously, spend an hour and give it a whirl. You'll find it's been designed by hackers for the express purpose of making life easier and more productive.

        • nooj says:

          > Seriously, spend an hour and give it a whirl.

          Seriously, he's not interested; end of discussion. If you like, feel free to put it up there and maintain the github for it yourself. You can field all the conversations and boil them down to a nice diff you can email to jwz once in a while. Any changes he incorporates into the official release you can add to github easily!

          Win win.

          • It's good you're around to speak on JWZ's behalf- He clearly can't do it himself since the accident that rendered him paraplegic and half retarded within the last few hours so it's kind of you to step in.

            Wait no that's just what you're implying about him I think.

            • nooj says:

              good call so right i never thought of that

              jwz> many times I found myself not needing to reply
              jwz> because someone else had ably handled it before I got there.
              jwz> I do like it when that happens.

              • handling something, and claiming to know how JWZ /feels/ are two different things.

                • nooj says:

                  so? was i wrong?

                  the real question to be asked is, did my comment do any good? and given that four other people made replies exhorting jwz to use github subsequent to my post (including one other person who said exactly the same thing ('sofixit')), i'll go with no.

        • Jairus Khan says:

          This man speaks the truth. Github is a boon for hackers, enthusiasts, and the community at large.

          • Rupert P. Fillywick says:

            Github is a boon for people more worried about screwing around with tools and being their own independent and unique snowflake than they are with communicating with upstream before writing code, writing solid code that meets the upstream project's requirements, and working with the upstream project maintainer to get it integrated into upstream as soon as possible.

            Github snowflakes would rather fork first, publish, and then finally deign to communicate with the project by sending a pull request, but after the code has already been written.

            Meh.

        • Rupert P. Fillywick says:

          Easier and more productive if your goal is to spend a bunch of time wanking around with more complex tools.

          I don't care about making things more educational. I don't want people co-opting my project's name and publicly publishing broken patches for me to then reject. I don't want to deal with this modern git/github wankery.

          • Leonardo Herrera says:

            Do you actually maintain a big open source project this way, or is this just what works in your current work environment?

            • github isn't just big open-source projects. I know financial orgs that use it, too. I know others that don't use github, but use git. Once you get large numbers of contributors changing over time, parallel development, and so on, DVCS is worth its weight in gold, which is why people used to pay for stuff like ClearCase before git and mercurial came along.

      • Kevin Lyda says:

        A valid question, my apologies for not expanding. And I'm not saying github exactly, there are other options - including one done by the company I work for. But github does seem to be on the leading edge in terms of social aspects.

        I guess the reasons could be broken into two parts: git and sharing.

        git: You've just made a major change to support a new platform - and you added a layer on top of OpenGL ES 1.1 to support OpenGL 1.3 code. First that could be useful to others. Second your port is incomplete and others might have time/ideas to complete it. Third others might have ideas to do alternative implementations of what you've already done. Git makes branches relatively easy and allows people to experiment and share their experiments w/o you having to accept the changes they made. Another nice part about git (or hg if you want to use that on bitbucket or code.google) is that backups of all your versions consist of just cloning the entire repo somewhere else.

        github/bitbucket/code.google.com/etc: This gives you a free, central site to host your code. And with git/hg you can host it multiple places and not have your data locked into any hoster. Most sites allow people to easily fork your repo, follow updates, submit patches for consideration and allows some structured give and take about those patches.

        It does encourage a bit more participation and allows you to delegate some or all of it if you choose to (perhaps all your permitting stuff will magically clear up and you'll have fun running your club and not want to deal with xscreensaver for a while). And the participation is mainly driven by code - it's not like a mailing list where people send mails talking about what they might do, they have to actually send you something.

        Besides, you might like git and learn a new, useful tool. Or you might hate it and supply us all with some fantastic rants. Either way someone wins, though admittedly not always you. :)

      • Zygo says:

        I have this question too. Assume for the moment that we all agree that git is awesome, and explain why to use Github and not Gitorious? Or why not install gitweb on your own web server? Or why not just upload the git repo to any random HTTP server and write a one-line blog posting with the URL and an email address where git can be configured to send its patches? OK, two lines.

        Git can automatically follow all those options if you just tell it the URL, so the only time you'll ever notice you're hosted on Github is when you use the UI and hosting service, or when said service spams you with social notifications.

        As far as I can tell Github is to code collaboration what Facebook is to blogs: the UI is missing obvious features I'd expect to be there, and even the features that are implemented won't return more than 1/3 of the information I know is in there. It seems to be very popular among the sorts of folks who like to spam their friends for free.

        • jwz says:

          People, please.

          This blog post is about OpenGL and XScreenSaver, not a referendum on the merits of github.

          If you want to argue with each other about github, please go do it somewhere else.

          My lack of interest in this conversation could only be described as religious in intensity.

  2. A UILabel can handle touch events if you enable user interaction on it. Do something like the below, and make a method that handles the tap by calling [[UIApplication sharedApplication] openURL:theURL], where theURL is the NSURL you want to open:

    UILabel *myLabel = /* whatever */;
    myLabel.userInteractionEnabled = YES;   // labels ignore touches by default
    UITapGestureRecognizer *tapRecognizer =
        [[[UITapGestureRecognizer alloc]
           initWithTarget:self action:@selector(urlWasTapped)] autorelease];
    [myLabel addGestureRecognizer:tapRecognizer];

  3. Lun Esex says:

    I read "I have ported XScreenSaver to the iPhone" and all I can think is "...pray that I do not port it further..."

    As for one of your iOS-related questions:

    The iPhone "Finder" equivalent is called the Springboard, and you can't get a screenshot of it without running jailbroken code.

    • Except, you know, by taking a screenshot. http://lmgtfy.com/?q=ios+screenshot

      (Me, every once in a while I have to go and prune all of the springboard screenshots out of my camera roll, as I inevitably end up taking one when I am actually trying to shut the phone down.)

      • hm, live by the fast snark, die by the fast snark: actually you're probably talking about taking a snapshot from within another app, in which case...

        ...I'll show myself out.

      • The user can take a screenshot of the Springboard (or anything else that shows up on the screen, more or less). XScreensaver (or any other app) cannot. XScreensaver (on desktop systems) can use screenshots of the desktop as a base for various image-manipulation hacks (melting screens, for example).

      • Lun Esex says:

        Let me clarify. The questions above are all about writing code for a third party app. In the context of doing this to get a screenshot of the Springboard, the answer is "no, not without running jailbroken code."

        I could have said "just tell your user to press the Home button to back out to the Springboard, press and hold the power and Home buttons simultaneously to take a screenshot, then re-launch your app. Then you'll need to write some code to get the user to navigate through their photo gallery to select the screenshot that was just taken. To be nice you can give them a one-button option to just choose the most recent image in their photo gallery, like a lot of Twitter/etc. clients do."

        But that would be the wrong answer.

    • Pelvis says:

      You can get a screenshot of EVERYTHING. Just press the home button and power button at the same time.

    • popler says:

      Of course you can, just press home and lock buttons at the same time!

  4. Jake Nelson says:

    That's some glorious Mad Science right there. I do hope some of the Wrong People have their noses rubbed in this.

  5. Awesome work! Totally agree about the idiocy of the API changes.

    There's glTexImage1D and I'm not sure how to simulate that with glTexImage2D.

    Create an X-by-1 texel 2d image, and use texture coordinates of (u, 0).

    There's no glPolygonMode with GL_LINE, so I don't see an easy way to implement wireframe objects with hidden surface removal. Maybe rendering them twice with glPolygonOffset?

    Render the faces using a black texture with a coloured border. Front faces will occlude the hidden surfaces, and the borders of the texture will colour the edges.

    • The artifacts along the polygon edges will surely only look a little less nice than a real rasterization of a line segment, right?

      • jwz says:

        It's not out of the question that the GPU uses texture primitives to rasterize line segments anyway. Haeberli wrote about that in 1993.

        • Given the choice between J. Random App Developer (whose priorities are getting it out on time and looking tolerable, usually) and GPU/driver "engineers" (whose priorities are to help sell more units of their particular fast parallel processor than their competitor), I'm really not sure who I trust less to implement that well. Still, that was a fascinating read: thanks tons.

      • The main artifact will be from MIPmapping, as the borders thin out and disappear in the lowest levels due to the averaging when generating them, resulting in the lines getting thinner as objects move away. To sort that out, you would want to explicitly generate the MIPmap with the same border width at each level of detail, so it's a larger proportion of the texel population and the smallest MIPmaps will be just the border colour.

        It's how some systems do wireframes, and it doesn't look bad.
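
        Something like this, say (a sketch; paint_border and shrink_half are hypothetical helpers standing in for whatever image code you already have):

            /* Build the mipmap chain by hand, repainting the border at
               every level so the averaging never thins it out. */
            int w = width, h = height, level = 0;
            for (;;) {
              paint_border (pixels, w, h);      /* redraw the edge texels */
              glTexImage2D (GL_TEXTURE_2D, level++, GL_RGBA, w, h, 0,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
              if (w == 1 && h == 1) break;
              shrink_half (pixels, &w, &h);     /* 2x2 box filter; clamps at 1 */
            }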

  6. ryanlrussell says:

    Ooh, a SpitePort. I love those!

    • Zuvembi says:

      Spite Driven Programming is about half of my working day.

      "No, fuck you! You broke the build. Fix your goddamn code!" - I wrote a Continuous Integration Server because of this.

      "Jesus christ! That shell script only has two parameters. How do you put your pants on in the morning?" - I wrote a Staging repository / web-app for that one.

      "Fuck me. You can't even do svn switch right, what is your major malfuction? And I fucking dare you to put 'initial import' again (for the 72nd time) for the comment." - pre-commit checks

      "God damn it! If you tell me it's a merge problem one more time without at least looking at the code change first, I'm going to go over there and defenestrate you." - Source control quick search webapp

      "You didn't review any of that code you lying scrote weasel. Time for five minutes with a live octopus down the pants!" - Continuous Code Review software

      "I fucking hate you so so so much Kwality Center. I hope someone got primo lap dances and scotch to buy this piece of shit." - Database munging to actually make our bug database work in a browser

  7. Anonymous says:

    To handle sphere mapping, and pretty much anything else non-trivial like that, OpenGL ES expects you to write a shader.

  8. Jesper says:

    All of this is neat and I don't mean to be ungrateful, but the kindest way I'd describe the iOS UI code is "non-idiomatic".

    Yes, in a way, whatever solves the problem is good, but there are a lot of contortions in the code to solve problems that don't arise when you do things in the conventional, recommended and sometimes expected way.

    For example, I've never seen anyone run into a race to select the table view cell that they might just have created that may not be fully around yet. On a small scale, you can just tell the table view cell to look highlighted and that's that; on a larger scale, you're supposed to use highlighting for temporary "you're-currently-tapping-this" tracking and not for selection in the traditional select list sense. (The way I'd do it, I'd have it be a modeless list as usual; tap on the row to run, tap on a blue arrow thing at the edge to configure.)

    Manually performing the action of a button or counting taps are also fishy. With the button, that's done because the run button code is in a custom navigation controller and the table view controller is separate; I'd stick the run code inside the table view controller since it's the concern of the table what sort of data it handles. You are making it so that they have loose dependencies on each other, which makes it more complicated than it has to be, certainly harder to follow and maybe creating weird memory semantics.

    And instead of counting taps, I'd use a UITapGestureRecognizer set to require two taps (so double tap) added to each table view cell, which keeps track of whether you've fulfilled the gesture, i.e. double tapped. This sure as hell isn't obvious since tapping something doesn't sound like a gesture, but UIGestureRecognizers are necessary because they can coordinate between each other - you can manage that one requires another to fail and the system can sort out the messy details of trying to match behaviors in a consistent way.

    I have some free time coming up. Maybe I'll work a little bit on this.

    • jwz says:

      I absolutely do not pretend to know what I'm doing when it comes to building UIKit or AppKit UIs.

      It's only my second UIKit app, and I had some weird constraints like being unable to use Interface Builder since my preferences panels are generated from very high level descriptions in XML files. Sample code isn't especially easy to come by. Also it was largely a port of what was only my second Cocoa/AppKit app, and as we all know, the "AppKit → UIKit" debacle is Apple's own entry in the "fuck the APIs, fuck the users, you rewrite all your code now" sweepstakes.

      Anyway, I'd be happy to take patches that make it do things in a more sensible way, but mostly I feel like it works well enough. There's a list you can select savers from, and the preferences are all hooked up. It's not pretty but it works.

      Maybe you haven't noticed it yet, but the grossest thing I did in there is how I used table rows with checkboxes to simulate option menus, sometimes several of them in different sections of the same table. Look at the XMatrix settings, for example. I did this because originally I had used UIPickerView for that, which I gather is the party line, but then I realized that I hate both how that control looks and functions with the heat of a thousand suns. So I rolled my own.

      • Jesper says:

        For what it's worth, I don't have any problems with the automatic UI generation. Polish issues aside, that looked like it worked well. It's the rest of the thing that feels weird. And I think it's entirely appropriate that Apple didn't try to shoehorn iPhone and iPad UI into the AppKit mold since the interaction models are different and any number of things in OS X will be irrelevant on iOS and vice versa. UIKit is great in that they found a few simple reusable abstractions that really hold and are easy to get your head around, which is the polar opposite of, say, Cocoa Bindings. (Yeah, there's a reason that's not there.)

        • jwz says:

          And I think it's entirely appropriate that Apple didn't try to shoehorn iPhone and iPad UI into the AppKit mold since the interaction models are different and any number of things in OS X will be irrelevant on iOS and vice versa.

          That's an interesting theory, and that's surely the party line trotted out by whatever bright-faced Second System Syndrome folks inside Apple who said "We're gonna re-do all that NextStep code but this time we're gonna do it right!" but as my rebuttal to this position I present to you nearly the totality of XScreenSaverConfigSheet.m.

          A god damned Label is a god damned Label. A god damned text-entry field is a god damned text-entry field. You look at the #ifdefs in that file and you tell me with a straight face, "Yes, it is good and proper and sensible that there are two completely different ways of setting the text that goes in a label. It is important that NSLabel and UILabel share no method names because the "interaction model" on a phone is so vastly different from the interaction model on a desktop."

          I'm sorry but that's a load of shit.

          It is the way it is not because it had to be that way or because it makes more sense for it to be that way but simply because whoever was in charge of the rewrite didn't give two shits about compatibility.

          • It's possible you are 100% right, and the expressed purpose of making them totally different was to force software developers to completely rewrite their software from scratch instead of letting them do lazy ports.

            • Yeah, because discouraging developers from porting software to your new platform is totally a good plan.

              Wait, what?

              • Well, they are now the richest corporation in the history of humanity. So yes, it was a pretty good plan.

                • Do not confuse "Apple got away with bullshit" with "bullshit is a good idea".

                • Also you, and a lot of people in this thread, seem to be conflating "develop new software" with "port extant software to a new platform". Those two things are not the same thing. The reason that one should still support extant APIs, but provide additional features in the newer versions of software implementing the backend for those APIs, is that software that works should continue to work.

                  If what JWZ were doing here is "write a new app to make pretty things happen on an iPhone screen", then it would be completely appropriate to suggest that he just go use the OpenGL ES framework and that he use the UIKit stuff.

                  But, and this is very important, that is not what he was ever doing. He was porting completely functional software to a different platform. It is reasonable to expect some differences, but it is not reasonable to expect "rewrite all of your graphics rendering code" and "#ifdef all of your UI interface calls for what is, at the core, the same fucking OS".

                  • I am not making any judgement about whether what Apple did was /right/ or /ethical/ or /good for developers/. What I am saying is that Apple's obvious and explicit strategy and desire is that iOS should not contain any ports of existing software, and they have proceeded to put as many barriers up to doing that as possible. This strategy appears to have worked out well for them financially. They may indeed have pissed off a lot of developers in the mean time, but please note that developer happiness does not directly result in riches.

          • Jesper says:

            This is interesting.

            You are taking two of the things I had in mind, mixing them and then pretending that's my argument. One half is that NSWindow and UIWindow probably don't have to have the exact same API since the chrome is totally different. (UIWindows don't even have titles, for heaven's sake.) The other half is that what they share should work the same. For Fountain, this works fine since it's practically identical on both platforms.

            However, I think Apple's sinned worse than not having the label control in iOS actually be a specifically configured text field whose way of setting text is "setStringValue:", defined in an abstract control superclass. I thought this was the kind of shit that sent you into apoplectic rage, and that getting rid of it would trump having to match and preserve mistakes made in 1987.

            What's more, solving this equation just for UILabel neatly - a word not to be mistaken for intentionally - avoids the fact that in iOS, many controls are actually constructed of other controls. UIButton doesn't draw its own text, it has a label and defers to it. That's a different, and saner, model than having a shadow hierarchy of controls and cells and having lots of API to patch the two together. That's an actual improvement that makes API compatibility hard if not impossible. And this is not just one thing, it influences all of the interfaces of all of the controls. Consider how Core Animation, which you may not care about but which handles the rendering, would flip at having to follow the control-cell model and either be a pain to program or make your phone run slowly (or both).

            A label may be a label, but you're smart enough to know that the complexity to do something worthwhile goes deeper than what they named the text property this decade. I think they knew what they were doing because they were building the whole stack and knew what had to happen for the layers to not step on each other, and I think they made the right decision letting cross-compile-compatibility for the UI with OS X be the lowest priority instead of the highest.

            • Jesper says:

              …" For Fountain, this works fine since it's practically identical on both platforms." WTF. Foundation.

              Also, since I buried this somewhat, the point is that so much is different that just ifdefing leads to something very painful. Cross-compiling UI code is only incidentally possible, but if you stick to portable model objects and inner workings and platform-specific UIs (where a cross-platform UI is itself a platform; see Swing, Qt, XUL), that works well and doesn't have to submit to the lowest common denominator.

              And no, my point isn't about shaming you or other people who haven't done this perfectly right out the gates, but that UILabel having different property names from NSTextField is the least of your worries if you actually have to worry about maintaining a project that has to provide UIs for both OS X and iOS.

              • nooj says:

                > UILabel having different property names from NSTextField is the least of your worries

                Exactly! They can't even get the easy stuff right.

          • Because of the platform differences, I get to Google search two documentation sets, following links that break every year. I wish UIKit would become the next version of OS X. People always respond in a panic that touch interfaces wouldn't work on a desktop, but UIKit already supports multiple interface paradigms in one project, for iPhone and iPad nibs, and it provides an API for querying which paradigm the app is running on. A UIWindow in a Mac nib could look like a standard floating desktop window, with smaller mouse-friendly controls. I just want to be running on the UIKit stack so I can reuse my damn code.

    • jwz says:

      Thanks for the DetailDisclosureButton suggestion, that does feel much more sensible.

  9. David Glover says:

    The Wikipedia OpenGL article is quite interesting. Immediate mode was removed in OpenGL 3.1 as well as in the "ES" variant.

    • jwz says:

      OpenGL 3.1 is basically a back-port of OpenGL ES to desktops. It's what you get when an already-not-very-bright child mates with its grandparent.

  10. Gryazi says:

    As someone who knows someone who wasted way too much lifetime standing too close to a spectacularly failed mobile startup (hint: starts with 'A'), I know what they were thinking:

    "We must save every byte possible because mobile devices will have 16MB RAM forever." ("And if anyone really needs to port legacy code, let only those apps link in a wrapper.")

    I have sympathies to the extent that this sort of made sense if you wanted to deploy circa 2003, when nobody wanted to repeat the mistake of Java Applets etc. (software requiring more RAM than any system actually had at the time). Resources really were monetarily expensive back then, even up to the launch of the iPhone 1.0. But clearly the brokenness is in continuing to support only the embedded subset on hardware that now exceeds the specs of workstations at the time the spec was written, and nobody bothering to cough up said wrapper until you.

    At the time, there was a lot of arguing over ES vs. "full" GL. I forgot about it until you jarred it loose. See also Sun's Java marketing for the malpractice of naming every possibly-related-but-not-compatible-thing with a two-character 'edition' suffix rather than names humans could keep track of.

    Typing this response on a more successful but conceptually-similar mobile OS that starts with 'A', which started life about 400% larger than the seeming target for the failed/semi-vapor Elate-based thing I'm thinking of, which forced me to paste this into a mail-app-as-notepad when I accidentally got scrolled back in the entry box and stalled for GC while doing it.

    • Gryazi says:

      PS: I think the deal was that hardware vendors were not going to get involved in assisting with anything other than the 'intended to match the realities of hardware' ES 'standard'. So if you were developing a mobile OS and wanted to link to vendor code rather than rolling your own... well, surely someone will write that wrapper if The Market Demand It, right?

      Then the IT equivalent of the Challenger disaster continued to sink in and 'mobile' got put on hold for nearly a decade.

    • Gryazi says:

      Also on reread, I have to add 'and flash!' to the assumed memory restrictions. Not link it? Just having it around in that 16MB 'ROM' was going to be a hardship, so... at least it didn't become 'optional' in the spec (aka 'not having a spec').

      Of course the luxury of having memory to waste on useful things like security and stability these days sure is nice. We pretty much managed to dodge having to put up with a "Windows 95"/guru-meditation era on smartphones.

      • Tarantella says:

        Dodged? Obviously, you've never owned a circa-2006 Blackberry or any of the feature phones that could try to run mobile Java apps. Then again, Windows 95 usually had better failure modes than those, and Guru meditations provided more useful hex digits in the diagnostic error message that informed you "it's time to pop the battery out of the back of the phone now", so you could be right.

    • Chad Page says:

      Ironically, a lot of dodgy design decisions in Android were to save memory, and now Android eats RAM like crazy, anyway...

  11. Richard Fine says:

    Well, now that you've implemented it, you can see exactly what the spec developers were trying to avoid: forcing driver developers to implement all that stuff, thereby saving on code size (and maintenance cost).

    Remember, the iPhone is actually very high-end relative to the range of hardware they were planning for. Initially it was running on feature phones - not smartphones. Port XScreensaver to a feature phone with a 50Mhz ARM chip and 1MB of memory, show that immediate mode is just as good as buffered mode, and then you might have a case.

    • jwz says:

      A driver is not a library! An API is not a driver! libGL.so is not a driver!

      I can understand how someone who designs GPU chipsets might be lost on the distinction but I would hope I was among people who understand software, here.

      What part of my phone's "OpenGL driver" is "jwzgles.c"? The answer is none. None part. The same part that it would have been had it been included in the system's libGLES.so.

      Also the armv7 .o file is 107 god damned kilobytes of compiled, unoptimized, unstripped code.

      Remember, the iPhone is actually very high-end relative to the range of hardware they were planning for.

      Please. Pieces of xscreensaver were written by me on a Sun 3/60, compared to which a Treo seems high-end.

      • Gryazi says:

        As the representative of the peanut gallery, http://dri.freedesktop.org/wiki/libGL makes interesting reading for being forced to learn what code actually has responsibility for what. That's only .org/DRI but Wrapping Is Hard when you can be lazy and the main side effect is 'new software runs fast'.

        I think I remember the developers of the rewrite of the 'Classic' version of said OS had to deal with this in similar fashion (and as a game-porting shop they surely would want to), but by 2011 they're apparently able to leverage Gallium over there and presumably that means... something, and that any stopgap measures are ancient history.

      • Richard Fine says:

        What are you talking about? Of course it's on the phone. libGL.so is an import library, no? Otherwise every app that used GL would be carrying a duplicate copy of the driver, and that'd be nuts, both from a space point of view and from a device-independence point of view. (Not that device independence was really realistic when ES was first around, but still). Does your iOS app contain a complete copy of every framework you linked to?

        I'm sure you have implemented XScreensaver on lower-spec platforms, but if you want to demonstrate that implementing immediate mode on top of ES carries no performance impact, doing it on an iPhone is not a strong test - certainly not strong enough to justify calling people idiots, in my eyes.

        (And let's be clear, I have no particular love for the GL orgs. Their handling of 3.0 was insanely bad. But I think you're criticising the decisions behind ES without fully taking into account the historical context in which those decisions were made).

        • Emil says:

          First of all, stop conflating "library", "driver", and "framework". Second,

          > if you want to demonstrate that implementing immediate mode on top of ES carries no performance impact

          That's not really the issue here. Let's assume modern hardware runs significantly slower with immediate mode. It's still worlds better to have your legacy code run slowly than not at all. The Wikipedia page on OpenGL opens up with

          OpenGL serves two main purposes, to:
          [...]
          * hide differing capabilities of hardware platforms by requiring support of the full OpenGL feature set for all implementations (using software emulation if necessary)

          Some software emulation sure would be nice right about now...

          • Alex says:

            > It's still worlds better to have your legacy code run slowly than not at all.

            Debatable, I suppose. Apple disagrees, which may or may not make them evil. It's pretty much self-evident that Apple wants all code that runs on its devices to be carefully tuned for those devices. This probably does make the user's experience on an iPhone more enjoyable, though if you think that maintaining a separate codebase for Apple devices harms others more than it helps Apple you might think that is bad for users in the long term.

            All I'm saying is that it is demonstrably not bad for users to have all their programs snappy and rewritten by hand. If Apple's decision was "evil" or morally wrong in some way, it is because their decisions affect other platforms' users adversely, not because their decision made the iOS ecosystem weaker.

            Developers are just a means to an end: happy users. Saying the decision was a bad one because it makes life more difficult for developers is missing the point entirely.

            • Lun Esex says:

              Developers are just a means to an end: happy users.

              Blasphemy!

              Don't tell Monkeyboy "Developers! Developers! Developers!" Ballmer that. Or Sergei and Larry.

        • A driver is not a library! An API is not a driver! libGL.so is not a driver!
          Sticking this in blink tags since you don't seem to notice it otherwise.

          • Richard Fine says:

            I know all those things. Maybe instead of assuming that I can't read, you should assume that I'm mistaken as to the roles these bits are playing. By my understanding,

            * OpenGL (ES) is a specification for an API
            * The API is exposed by libGL.so
            * libGL.so does no substantial work, but passes everything on to the driver
            * The driver and the hardware do the rendering

            The bulk of the tech that JWZ implemented - building immediate-mode calls up into buffers etc - would be implemented by the driver.

            • Big says:

              I haven't (nor do I intend to) read jwz's code here, but I'd bet a round of drinks that 100% of the code in question just listens for (old) API calls and bundles them up into (new) API calls. If Jamie _did_ sink down into the driver code, he's even crazier and more pigheadedly stubborn than I've up to now assumed (attributes which, I'd like to point out, I quite admire).

              • Richard Fine says:

                Sure, I doubt Jamie's implementation was at the driver level; but if the OpenGL ES Spec called for immediate-mode support, and NVidia and co were implementing it, I would expect them to put it in the driver.

                • Big says:

                  Did we not both read the same rant? The one up there that says:

                  "They eliminated "immediate mode" in favor of a syntactically very different -- yet functionally equivalent -- way of doing things: instead of calling glBegin and glVertex, you are expected to put all of your vertexes and normals into an array and then call glDrawArrays on it, to draw it in one fell swoop. (This API already existed, but they made it be the only way to do it.) "

                  and:

                  "Because I've implemented the OpenGL 1.3 API in terms of the OpenGL ES 1.1 API, and it works fine."

                  Because if we _did_, then I've got no idea what you're trying to say…

          • So the bit of code on my machine which happens to build whole GPU command buffers and spit them to the actual hardware isn't a part of the driver then?

            (Yes, really, this is how it works. On Unix, your libGL.so is generally directly shipped by the GPU vendor, FreeDesktop DRI/DRI2 lackluster performance abominations notwithstanding, though they seem to have finally figured out they were doing everything wrong now and actually have things like a in kernel memory manager to make the whole thing work properly. On Windows, opengl32.dll does a sum total of... forward the calls to your driver's ICD (installable client driver), and only does that for those defined by an ancient version of the specification... everything else, you're expected to call wglGetProcAddress to get a pointer to by hand. In fact, pretty much the only platform which doesn't do this is Mac OS X, and presumably iOS, to the continual annoyance of the driver vendors (because what they can expose is limited by what Apple implement in their half), and app developers (because OS X GL is always lagging behind))

            Incidentally, it is notable that Apple's OpenGL implementation on the desktop now offers you a binary choice: Stick with OpenGL 2.0 and continue to get all the legacy features but none of the new shininess... or go OpenGL 3.2 Core Profile without the legacy

  12. tkil says:

    making sure all the savers reacted sensibly to orientation changes

    Speaking of which, iDaliClock doesn't seem to behave in quite the same way as the standard apps (Mail, Safari). In the standard apps, you can rotate the screen to landscape, then place the phone flat on its back -- and the landscape orientation is maintained.

    iDaliClock reverts to portrait orientation, even if it was in landscape before being set down flat.

    Minor, but I noticed it...

    Thanks for slogging through this stuff. Wish I had more time and energy (and money, thanks Apple) for iOS devel.

  13. Cris says:

    No, it is YOU who should "get out of the software industry right now". You are just an old man whose loosing touch.

  14. Sol_HSA says:

    Howdy. I wasn't around when the ES spec was being written, but I've seen much of the development afterwards, so I might be a half-idiot or something...

    - The devices OpenGL ES was designed for never really existed; the CPU power, memory and storage capabilities have grown WAY more than anyone really expected. There was even a fight over whether you should be able to use anything other than bytes for UV coordinates. (Hindsight being 20/20: yes, that sounds idiotic.) Had you tried to port xscreensaver to a device that ran the common-lite profile of OpenGL ES 1.0, you would have had a lot more problems than you had with the supercomputer that the iPhone is.. =)

    - OpenGL is horribly bloated. Implementing the full set of features from, say, OpenGL 1.4, is a lot of effort (hardware support, implementation, drivers, testing, conformance, blah blah), and there's plenty of features hardly anyone uses. The point was to take exactly what's needed and nothing more, and in many cases I believe they included stuff that should have been cut. (The same process went on with ES 2.0, and there's some stuff there as well that should not have made the spec).

    - Personally, I believe there should have been a open-source liberal license "GLU"-kind of library that does exactly what you've done, from get-go, that re-implements the immediate mode stuff. (As to whose responsibility it would have been and whose resources to spend on it is always a good question..) It's not performant (compared to doing things "right"), but in my experience 9 times out of 10 you don't actually need that bleeding edge performance anyway, and glBegin..glEnd is far easier than setting up VBOs for drawing two triangles..

    - Along those lines, I believe there should be an open-source OpenGL 1.x implementation over OpenGL 2.x, for the same reasons -- this would free the driver developers from a lot of (unnecessary) work. And again, whose responsibility would it be, and who'd throw money at it?

    - Finally, the design of OpenGL, and OpenGL ES, is a result of committee work where all participants have their own (hidden) agenda to push. This leads to some of the more "interesting" results.
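
    The sketch promised above: a minimal illustration of the two styles for the two-triangle case, written against a desktop GL 1.5-era header. The data and function names are purely illustrative, not from any real codebase:

      #include <GL/gl.h>

      /* Two triangles forming a quad; purely illustrative data. */
      static const GLfloat quad_verts[] = {
          -1, -1, 0,    1, -1, 0,    1, 1, 0,    /* triangle 1 */
          -1, -1, 0,    1,  1, 0,   -1, 1, 0,    /* triangle 2 */
      };

      static void draw_immediate(void)        /* the OpenGL 1.x way */
      {
          int i;
          glBegin(GL_TRIANGLES);
          for (i = 0; i < 18; i += 3)
              glVertex3fv(quad_verts + i);
          glEnd();
      }

      static void draw_with_vbo(GLuint *vbo)  /* the GL 1.5 / ES-friendly way */
      {
          if (!*vbo) {                        /* create and fill the buffer once */
              glGenBuffers(1, vbo);
              glBindBuffer(GL_ARRAY_BUFFER, *vbo);
              glBufferData(GL_ARRAY_BUFFER, sizeof(quad_verts),
                           quad_verts, GL_STATIC_DRAW);
          }
          glBindBuffer(GL_ARRAY_BUFFER, *vbo);
          glEnableClientState(GL_VERTEX_ARRAY);
          glVertexPointer(3, GL_FLOAT, 0, (void *) 0);
          glDrawArrays(GL_TRIANGLES, 0, 6);
          glDisableClientState(GL_VERTEX_ARRAY);
          glBindBuffer(GL_ARRAY_BUFFER, 0);
      }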

  15. dinatural says:

    Thanks JWZ for this release! It's great to see those legendary screensavers on an iPhone right now.
    Just a little comment: maybe it's my Xcode that's effed-up, but while building XScreenSaver-iOS, it complained in molecule.c that it couldn't find "molecule.h", so I had to comment that out to build successfully (molecule wasn't working, but all the rest are! very impressive!)

    I was wondering, was it the impending apocalypse at DNA Lounge that made you fall in love with code again? Just a thought...

  16. Too damn right!

    http://williamedwardscoder.tumblr.com/post/14011115100/opengles-i-want-my-gl-quads-back

    I used to know a few Khronos spec chaps, and they weren't out to fight the 3rd-party programmer's corner; they were out to deprecate anything their GPU architecture couldn't do well without driver effort.

    • datenwolf says:

      Quads are problematic, because there's no stable way to split them down into triangles. Which diagonal do you use?

      Quads are problematic, because they may be concave or non-planar (all it takes is the four vertices not being coplanar).

      • nooj says:

        they don't stop being problematic because you forced the user to do the work. implement it once upstream so we don't all have to solve the same problem.

        • datenwolf says:

          But there is no universally applicable solution that could be implemented upstream. Modern GPUs can only process triangles, so quads have to be tessellated down. So tell me: how would you tessellate the following quad:

          -1,-1,-1
          1,1,-1
          1,-1,1
          -1,1,-1

          There are two possible solutions. And as a programmer using an API, I want the result to be predictable. OpenGL never specified what would happen in this situation. Actually, if you were to draw such a quad, the results are undefined and may be anything. So the problem had never been solved upstream.

          If however you remove the possibility of drawing quads (and polygons), you force the API user to think about the problem. The result is that the program sending the geometry has to take care of this itself, which is, yes, more work. But on the upside, the outcome of this process is a) reproducible and b) stable!

          Quads and polygons have not been removed because the ARB hates people, but because it saves a lot of headaches -- on both sides -- in the long run. For example, very early in writing my first 3D game engine, back when not even OpenGL-2 was out, I specified that quads must not be used. Every model file importer tessellated quads into triangles. And not into two triangles, but into four, with a common vertex at the barycenter of the original quad. This solution was much more stable than whatever any driver could have done, because I knew exactly what was going on.
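
          A minimal sketch of that barycentric split, assuming flat arrays of xyz floats; the function name and layout are illustrative, not from any real engine:

            /* Split one quad (4 xyz vertices, in winding order) into 4
               triangles sharing the quad's barycenter: 12 vertices out. */
            static void quad_to_fan(const float q[12], float out[36])
            {
                float c[3];
                int i, k, n = 0;
                for (k = 0; k < 3; k++)      /* barycenter of the corners */
                    c[k] = (q[k] + q[3+k] + q[6+k] + q[9+k]) / 4.0f;
                for (i = 0; i < 4; i++) {    /* one triangle per quad edge */
                    const float *a = q + 3 * i;
                    const float *b = q + 3 * ((i + 1) % 4);
                    for (k = 0; k < 3; k++) out[n++] = a[k];
                    for (k = 0; k < 3; k++) out[n++] = b[k];
                    for (k = 0; k < 3; k++) out[n++] = c[k];
                }
            }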

          OpenGL is not, never has been and never will be for the lazy.

          • nooj says:

            very early in writing my first 3D game engine, I specified that quads must not be used

            this is the point. you started out by saying "don't do this" and you kept to it. all is true and right with the world.

            opengl started out by saying "hey, quads are really handy!" and then the update to that (or the completely new api that users were effectively forced to migrate to) said, "quads? what quads?"

            also:

            there is no universally applicable solution that could be implemented upstream.

            so? maybe it is better for the user to figure it out. but give them a risky alternative!

            users can work around a whole lot of bugs, unstable implementations, etc., just so they can get their work done. but as a non-user (let's not get into the driver/api debate), take all the cumulative time users will spend working around a soon-to-be-broken implementation, and spend at least the log of that time trying to fix the break.

            it's a little arrogant to say you're doing an unmitigated good by forcing the user to think about something. who are you to decide what the user should think about? for most applications, it doesn't matter that the results are undefined. it doesn't matter that they're not stable. the software industry (and every other industry, too) is rife with examples of nonstandard, officially undefined stuff that everyone kinda settled on.

            every single upstream implementation could be different, and yet i bet overall it would still be fine, because user cases don't change implementations often. as long as 1) the upstream side does something reasonable, and 2) whatever gets done allows the user to accomplish their task, it doesn't matter that the non-user-side is a small tower of babel.

            i think we've both made our points well. thanks for listening.

  17. Nick Thompson says:

    I don't know the history but I suspect that the "OpenGL" in "OpenGL ES" is like the "Java" in "JavaScript". It's a marketing position ("competes with DirectX").

    Similarly, the main thing "OpenGL ES 2.0" has in common with "OpenGL ES 1.0" is that they match the same search terms.

  18. anonymouse says:

    So, there was actually a kind of logic behind OpenGL ES and GL 3.1 trying to get rid of immediate mode. The problem is that immediate mode really is gratuitously slow -- not necessarily because of the hardware, but because of all the software work that needs to be done for each vertex if you do things that way. You have to check if you're in a Begin(), you have to check a bunch of other state, and then you have to write some bytes to the hardware.

    Now, you'd think that this gratuitous slowness would encourage the people who actually care to rewrite their software to use the newer, more efficient APIs that are available in newer versions of OpenGL. But you'd be wrong: a lot of the software that both uses the old, inefficient APIs and cares about performance is old CAD and other such specialized software. The people who wrote the original IRIX versions and knew how they worked are long gone, and the people maintaining them now know just enough to have ported them to Linux. So when these software vendors complained about performance, and the graphics card vendors told them to use newer APIs, there was a lot of complaining, and the graphics card vendors made the mistake of trying to optimize immediate mode instead. Which they've done with considerable success, but also heinous hacks. Think libGL mucking about with hardware page table entries.

    The GL ES and GL 3.1 effort at removing immediate mode was an attempt to go back and force the issue, making those stupid old CAD programs use the newer, better APIs so that those terrible immediate-mode-optimization hacks wouldn't be needed anymore. So the motivations were not entirely unreasonable, but this mess really shouldn't have existed in the first place.

    By the way, the sort of thing you did here isn't entirely without precedent. I know of OpenGL ES 1.1 implementations that basically just map to an ES 2.0 implementation, because that's what the hardware actually supports directly.

    • KK says:

      Most of the SW load with gl calls comes from the fact that the OSes are multithreaded / multiprocess. Each and every gl call needs to obtain a mutex, fetch the current graphics context from TLS, only then do the above processing, and finally release the mutex. (Conceptually; in practice, the way to implement thread safety / inter-process communication naturally depends on the OS.) The GPUs can do triangle setup in something like 6 clock cycles, while the overhead of a single glVertex function call is thousands of clock cycles. So the GPU is just starving for data, as all the CPU cycles are spent feeding in the vertices at a very slow pace.
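
      Conceptually, each entry point then looks something like the sketch below. Every helper name here is hypothetical -- this is not any real driver's code -- but it shows why one glVertex call costs so much more than the triangle setup it feeds:

        typedef float GLfloat;
        typedef struct GLContext GLContext;     /* opaque, hypothetical */

        /* All of these helpers are hypothetical stand-ins: */
        extern void acquire_global_gl_mutex(void);
        extern void release_global_gl_mutex(void);
        extern GLContext *current_context_from_tls(void);
        extern int in_begin_end(const GLContext *);
        extern void append_vertex(GLContext *, GLfloat, GLfloat, GLfloat);
        extern void record_error(GLContext *, int);
        #define GL_INVALID_OPERATION 0x0502

        void glVertex3f(GLfloat x, GLfloat y, GLfloat z)
        {
            acquire_global_gl_mutex();          /* cross-thread safety */
            GLContext *ctx = current_context_from_tls();
            if (in_begin_end(ctx))              /* per-call state checking */
                append_vertex(ctx, x, y, z);    /* finally: 12 bytes of data */
            else
                record_error(ctx, GL_INVALID_OPERATION);
            release_global_gl_mutex();
        }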

      A wrapper can actually handle old code more efficiently than a full-blown driver, as you can go single-threaded and don't need to care about handling every possible error situation. So congratulations to jwz -- what you've done is exactly the correct solution here. But so is deprecating glBegin/glVertex/glEnd etc. They are just harmful, and keeping them in would just perpetuate wrong programming practices among the majority of developers. If you google for opengl tutorials, you'll get tons of examples that use the bad practice, and it's obvious that if the features hadn't been removed, the majority of beginning application developers would still be using them. And complaining constantly about performance problems.

      So the problem is really that the original OpenGL specified this the wrong way. The OpenGL ES committee finally dared to fix it, as there was no requirement to support legacy applications back at that time.

      • nooj says:

        > OpenGL specified this the wrong way. OpenGL ES committee finally dared to fix it

        And it's sure a good thing they did it right that time! Because really, the mess got a lot better after that, and stayed better.

  19. Rogero says:

    wow, the code quality speaks for itself. I think I know now why netscape failed ...

  20. Jay Vaughan says:

    Nice work jwz! I believe your efforts will be very much welcome in the Pandora Handheld Gaming scene, where a lot of work has been done over the years to port OpenGL games to OpenGL/ES, and I look forward to investigating the utility of your shim layer in that context - maybe we will see some amazing ports of OpenGL games to the Pandora console as a result.

    One thing: I downloaded xscreensaver-5.16.tar.gz, untarballed it, opened up Xcode, selected the XScreenSaver-iOS target, and hit the build button. Instant failure: missing m6502.h file, referenced from m6502.c. I looked all over; it seems like it's missing from the archive for some reason. Just thought you should know.

    • jwz says:

      Let me know if it's useful!

      That and molecules.h are generated files. I guess Xcode isn't building them automatically for some reason. Try building the "m6502" and "Molecule" targets in Xcode first. Failing that, configure and make in the shell should build them.

      • Jay Vaughan says:

        Okay, I selected the 'm6502.h' target (didn't see it before, that list is loooong...), built (it gen'd the file, all good), and then the Molecule target, and it also built molecules.h. So, a little build hand-waving is necessary, but nothing major.

        But now when I build the XScreenSaver-iOS target, I get as far as sonar-icmp.c -- "'netinet/ip_icmp.h' file not found". Hmm .. not sure that's available on iOS.

        (And yes, I'll let you know if the Pandora guys find your work helpful to boot up GL ports to AngstromOS... The thread I started about it is here, in case you want to pitch in yourself or keep one of your hairy eyeballs on the case: http://boards.openpandora.org/index.php?/topic/8870-has-jwz-solved-the-opengl-problem/ There are some great GL / GLES hackers in the Pandora scene; maybe there will be some synergy with this work .. at least XScreenSaver can be ported to Pandora now, heh heh ..)

        • Jay Vaughan says:

          Oh, never mind, I should read the code (comments) before I get all uppity to report problems .. Have successfully built XScreenSaver for iOS and am now ready to dig in and have a bit of a play with your GL shim. Thanks again jwz!

        • jwz says:

          Yeah... that one's a pisser. It works in the sim but not in the real device until you fix the headers. See my comment at the top of sonar-icmp.c for how to fix it.

          • Jay Vaughan says:

            Thanks, fixed. I'm having a blast going through all the savers now, it really looks like you nailed it man. I'll be taking a closer look at libjwxyz/jwzgles.[h,c] in the next day or so and see if there is some way to use it to port F/OSS OpenGL games from Linux (x86) to Linux-angstromOS/PandoraOS .. it would be very intriguing if in fact this ends up being a productive way to do such a port, because there is much demand in the Pandora scene for some great 3D games, and they are out there ..

        • John says:

          In Xcode, for the "XScreenSaver-iOS" target, select the Build Phases tab and add them as two Target Dependencies. Then if you run clean and rebuild, it will rebuild them first if necessary.

          • jwz says:

            In Xcode, is there a way to add a dependency on a .o file rather than on the target as a whole? That is, in Makefile terms I'd rather do:

            molecule.o: molecules.h

            than

            XScreenSaver-iOS:: molecules.h

            • John says:

              Not that I've ever seen. I think it has to be a Target. Xcode, like most Apple nonsense, tries to hide all the advanced compiling action from the user and just give them the end result. There may be a way, but I'm unaware of where to set it if it exists.

  21. datenwolf says:

    Your ramblings would be fine and square, were they not so misinformed.

    First of all, OpenGL-ES is not OpenGL. It's a completely new API, and full code compatibility was never the goal. The intention of OpenGL-ES was to have a 3D graphics API that could be painlessly implemented even on very small-scale systems.

    Second: You completely confuse the fixed function pipeline with immediate mode. The fixed function pipeline is that heckuva mess of state switches in OpenGL-1.x where you're coding your ass off just to get all the parameters right for things to look okay. And in case you miss setting some parameter, or some plugin or other part of your program sets state that interferes with your rendering code, you have to add another bunch of glGet… and matching state-setting calls to get things back to a sane state.
    Add to this that since about 2003, GPUs are no longer fixed function. They're freely programmable. Sending a small snippet of code that precisely controls all the vertex transformation and fragment generation steps is a lot easier than keeping track of all the mess that OpenGL-1.4 state has become.

    Third: Immediate Mode had been deprecated ever since OpenGL-1.2 was around. The Red Book (OpenGL Programming Guide), 2nd edition, clearly states this to everybody who reads it: don't use immediate mode, unless you want to create Display Lists.
    Also, Immediate Mode always was a major performance PITA. You can't effectively do DMA transfers with immediate mode, and you have a ton of context switches. The only reason for Immediate Mode to be there was that back in 1992, SGI modeled OpenGL after the command set of their graphics processor modules, and each OpenGL call matched 1:1 to a GPM opcode. Immediate Mode is something you do not want in an embedded system; embedded-system GPUs are tile based, and they need whole primitive batches to operate efficiently. All the glVertex calls would first have to add to an array (you can't use a linked list, because those don't transfer well over DMA). So how big do you make your buffer? Oh, and how about mapping GPU memory into your process address space directly, to save that additional data copy? But with OpenGL-1.1 they introduced Vertex Arrays, to be able to process batches. That was 1996. If you're still using Immediate Mode today, then you are the moron. You had over 10 years to learn, and to port over your code. Heck, even in OpenGL-3 (which is actual OpenGL, not the embedded API that never aimed to be compatible with it) you can create a compatibility profile and keep using the old stuff, though this is pure madness; you'd have to be a major masochist to do so.

    When OpenGL-2 was in the making, the major version bump was seen as an incentive to get rid of all the old crud (Display Lists, Immediate Mode -- but Fixed Function was still there). Eventually it happened with OpenGL-3, and not a day too soon. Today the GPUs all over Desktops, Workstations and Embedded are very different from what the OpenGL-1 API assumes. For example, every time you make a state change in OpenGL fixed function, the driver has to generate a matching set of shaders in situ. This is not very efficient. Today GPUs read all the geometric primitives they process from their own memory. With Immediate Mode you first have to build a buffer on the client side, fill it, maybe grow it several times, then copy it to GPU memory, before you can actually draw from it.

    OpenGL-1? Fixed Function? Display Lists? Immediate Mode? Good riddance!

    Instead of writing a compatibility layer, the better method would have been porting all the XScreenSaver OpenGL hacks to OpenGL-2 and removing all the Immediate Mode and Display List use. Much more future-proof in the long run. Also, writing a maybe 15-line set of shaders is far more pleasant than keeping track of and setting several dozen state variables, with at least double the lines of code.

    • "These people don't know what they're talking about, because the contour of the API has absolutely fuck-all to do with what goes over the wire."

      • datenwolf says:

        Well, if it were just commands going over the wire, then maybe yes. But we're talking about data too. The difference between Immediate Mode and Vertex Arrays is like file access through fputc / fgetc vs. mmap.

        So you want to be able to do DMA transfers. Either you do DMA transfers directly from process memory, or you map some DMA memory into the process address space. Preferably in both directions. Preferably using a unified API. So you introduce a Buffer Object mechanism that covers vertices, pixel data and even uniform data.

        Immediate Mode APIs are 1970s style. Today's operating systems and hardware are early 21st century and work differently.
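
        A minimal sketch of that buffer-object mechanism, assuming desktop GL 1.5-era entry points (on OpenGL ES 1.1, mapping only exists via the OES_mapbuffer extension); the function name is illustrative:

          #include <GL/gl.h>
          #include <string.h>

          /* Stream vertex data through a mapped buffer object -- the
             "mmap" style described above -- instead of one vertex per call. */
          static void upload_vertices(GLuint vbo, const float *verts,
                                      size_t bytes)
          {
              void *dst;
              glBindBuffer(GL_ARRAY_BUFFER, vbo);
              /* Re-specify the storage, then map it into our address
                 space and copy into it directly. */
              glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STREAM_DRAW);
              dst = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
              if (dst) {
                  memcpy(dst, verts, bytes);
                  glUnmapBuffer(GL_ARRAY_BUFFER);
              }
              glBindBuffer(GL_ARRAY_BUFFER, 0);
          }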

        • you really /don't/ understand what JWZ has done, if you are still posting comments like this. my attempts to get through to you have failed.

          • datenwolf says:

            Oh I fully understand what JWZ has done. Why? Because I did implement such a compatibility wrapper myself, years ago.

            However, such wrappers completely miss the point. There's no longer any glTexGeni or glTexEnvi, because the mess of working with them, compared to hacking up a small vertex shader, simply doesn't justify emulating them. Yes, you still have the code in desktop OpenGL drivers. For compatibility reasons.

            GL_QUADS have been removed for good reason. You cannot unambiguously tessellate them down. And quads have the annoying property of occasionally being concave.

            It is, BTW, far easier to port OpenGL-ES programs to OpenGL. OpenGL-3 actually has an OpenGL-ES compatibility mode. So if you really want to be portable: use OpenGL-ES and add full OpenGL features only as needed. Nice side effect: WebGL compatibility for free.

        • Zygo says:

          File access through fputc/fgetc is sufficient for running a lot of code today, even though there are half a dozen APIs optimized for various different conditions which do a better job of doing something else. Very few of those alternatives will do things like translate character encodings or line endings for you, or let you deal with characters one at a time without having to worry about whether the buffered I/O implementation you have to write is going to suck.

          The goal here was to port a bunch of teenage graphics hacks to a platform designed by aliens, with all their endearing quirky behavior preserved to the extent possible. For that, you want exactly what jwz wrote.

    • I mean it's neat and kind of creepy that you memorized all that and are able to parrot it back, but it doesn't really reflect an understanding of what JWZ has done or why he's done it.

      • datenwolf says:

        Oh, I understand what JWZ did. I also recall JWZ being worked up about certain changes in the OpenGL-2 API a few years ago in a blog post. This doesn't change the fact that modern GPUs work differently than what was there when OpenGL-1 saw the light. And the changes in the OpenGL API were done to reflect how modern GPUs work and should be programmed.

        The attitude JWZ demonstrates is exactly the mindset that hindered the development of OpenGL in the years between 2003 and 2009. When shader GPUs became available, the ARB was trying hard to implement a gazillion texture combiner switches to make use of them. Just having a programming language that let you express the intended operation was frowned upon. Luckily 3DLabs went ahead and proposed GLSL, though it was bug-ridden in its first years.

        OpenGL fell to a disadvantage compared to Direct3D because it took so long to reflect the actual state of GPU design. Only after the ARB saw some substantial restructuring, and OpenGL fell under the custody of Khronos -- free of SGI ever since -- did noticeable improvements come about. Today OpenGL (-3 and -4) directly reflects a certain hardware class's feature set, which is a good thing. I no longer have to worry about which extensions certain hardware does (not) support. I just target OpenGL-3 or OpenGL-4 for whatever needs I have.

        Also: Nobody prevents you from still using OpenGL-1.4. But please don't ask for a ton of clunky extensions to somehow get access to the shiny stuff there, too.

        There are some areas where JWZ did a great job of foreseeing future complications, like the whole mess of X toolkits interfering with the security of a screen locker. However, even this could be done a lot more elegantly if X sessions were detachable. Today you can use Xpra (though then you lack a lot of nice things) to detach your session; this is much more secure than having your screen merely locked, because there's actually no session screen there.

        • You've opened my eyes. What a tragedy that JWZ has taken a mere 3 days to undo the years of progress you fought so hard for. I feel for you, really.

  22. Datenwolf, you are right, those old calls sucked. However, writing a compatibility layer in a separate lib would have been a minor effort, and would have addressed all concerns, including bloat. All books and APIs could say "don't use this POS compat layer, it sucks and was written for shitty software."

  23. jayblanc says:

    This Motorbike has no backwards compatibility with the Car. But don't worry, with some two by fours and a couple of spare wheels, I've made a kit you can use to turn any Motorbike into a Car!

    • That's not the issue. The issue is that this Car 2W has no backwards compatibility with the Car.

      • jayblanc says:

        I refer you to my own comment, and those from all the other people, who have pointed out that OpenGL ES isn't a new version of OpenGL, but an entirely new hardware capability standard that shares some similarities.

        • Yes, lots of people have been saying that, and I'm not disputing that.

          It may be perfectly reasonable to (by analogy) say that OpenGL is a "Car" and that OpenGL ES is a "Motorcycle", so jwz's complaints are equivalent to people complaining that the new Motorcycle is not compatible with the old Car.

          However, the issue is, in part, that the two-wheeled motorized vehicle in question was not called a "Motorcycle", but rather a "Car 2W", and this has yielded complaints that the Car 2W is not compatible with old usages of the Car.

          If a choice was made to call it "OpenES" instead of "OpenGL ES", then perhaps not so many people would think it was supposed to be a version of OpenGL.

        • jwz says:

          And I refer you to my comment that that's a silly thing to say.

  24. Perhaps surprisingly, at least in iOS 5 and under, the recommended way to display styled text and things with links in it is indeed to embed a UIWebView.

  25. Jay Vaughan says:

    I think, in the rush to kill jwz's rantbuzz, people are missing a point: why was it necessary to break so much code, in the first place, by killing old portions of the spec? It wasn't necessary: it was decided.

    Because the OpenGL ES designers didn't want people to write code that way -- and they wanted to encourage the hardware designers to make their interfaces available accordingly. To wed these two camps, sacrifices were definitely made -- in favor of the chip foundries, mostly, and with little care for the 'old code' that might not be usable on the new technology. You see, the only way hardware foundries can be convinced to standardize is if they don't have to spend money on driver development. Hardware foundries hate device driver development; it is despised. As close to raw as possible is the general rule, and has been for a long time.

    So, for OpenGL ES, decisions to prune out great chunks of the GL spec were made. There was no 'forward-porting of old code from old platforms' even on the horizon: it was new, new, new, baby!

    This ideology of rapidly discarding old stuff for the new hawt is rampant in our worlds -- on one hand, destruction of cruft is a wonderful thing, but on the other hand, it's completely arbitrary. There are some great OpenGL codebases out there which just won't ever run on the new stuff: not because they're not interesting to the user, but because a barrier to their survival as a codebase was imposed upon them.

    Backwards compatibility does not, ever, have to be sacrificed for the sake of forward-thinking, future-smart "new shit". It's simply a decision.

    • jayblanc says:

      Because OpenGL ES isn't a new version of the OpenGL spec. OpenGL ES is a different capabilities standard for a different kind of computing platform than the one OpenGL addresses. It isn't backwards compatible, because nothing is intended to just be taken over and run on these new platforms. You're not meant to just be able to run your OpenGL apps on OpenGL ES platforms; you're supposed to have to make adjustments to fit the platforms they're running on. They explicitly refuse support for some of the things OpenGL does, because that's not how the platform works. As I said above, the complaint is that a Motorbike is not a Car, and the 'fix' is bolting two more wheels and a roof onto the Motorbike.

      Additionally, OpenGL is not just an API around a software library. It's a hardware capabilities standard as well. JWZ is wrong to assert that OpenGL is 'just an API, not a Driver'; it's actually somewhere between the two. The continuing existence of deprecated parts of OpenGL does have an impact on the hardware development side: the API does not and cannot just shunt things to software or wrap around other OpenGL calls -- that ends up having to be done in the driver code.

  26. psvx says:

    OGL ES is not a new version of the desktop OpenGL API -- it is an entirely new specification. Thus, it does not have to be backward compatible in any way with desktop OGL; it's an entirely different project. You have to target your app for it from the beginning, so your argument is invalid, good sir.

    • jwz says:

      As OpenGL ES 1.0 is defined as a diff against OpenGL 1.3, saying it's "an entirely new specification" is clearly false.

      Some people commenting here clearly want to believe that OpenGL and OpenGL ES bear the same relationship as Java and JavaScript: that the similarity is in name only, that it's just a marketing gag. But that's obviously not the case.

      OpenGL ES is a new version of OpenGL. It's just one that made little attempt at backward compatibility.

      You may think that's a good idea, but claiming that one's not a version of the other is a crazy thing to claim.

      • datenwolf says:

        Yes, OpenGL-ES has been written as a diff against OpenGL-1.3. So? A diff can be anything, even a complete, incompatible respecification. If it was fully backwards compatible it would have been an add.

        I've now been actively using OpenGL for well over 15 years. It's one of the few APIs I can honestly say I know by heart. If you look at StackOverflow, I was the first person to gain the golden OpenGL tag badge there, and I am also the top user for the OpenGL tag. The same goes for my activities on comp.graphics.api.opengl. I'm not saying this to show off, but just to give you an idea of how much I use OpenGL.

        And in my opinion, all those changes to OpenGL, the backwards compatibility broken by OpenGL-3 core -- they are good things. Immediate mode, the matrix stack and display lists tempted people to use OpenGL as something it never was. OpenGL was abused as a scene graph (tempted by the matrix stack and display lists), a math library (the matrix stack), an interactive paint framework (immediate mode) and much more.

        Good APIs can be recognized by making it very hard to do things they were not meant for. OpenGL-2 was a particularly bad API in that regard, because it could be abused so well.

        However: OpenGL-3 still has the compatibility profile, and you can do everything OpenGL-1.1 offered with it. So that backwards compatibility is there.

        But OpenGL-ES was targeted at systems which may have made it very hard to properly support everything OpenGL has. And by stripping those things away, people finally realized that they didn't actually need them in their real-world applications.

        Matrix Stack? Completely redundant. Most applications do their own matrix math for animation and/or physics simulation anyway. Real-world applications mostly used glLoadMatrix and never built transformations on the stack. glRotate was almost always abused in consecutive calls to build Euler angles, prone to gimbal lock. Matrix Stack gone? Good riddance.
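
        A minimal sketch of that glLoadMatrix style, assuming GL's column-major layout; the function name is illustrative:

          #include <math.h>
          #include <GL/gl.h>

          /* Build a rotation about Z ourselves and hand the finished
             matrix to GL, instead of using glRotatef on the stack. */
          static void load_z_rotation(float radians)
          {
              float c = cosf(radians), s = sinf(radians);
              GLfloat m[16] = {
                  c,  s,  0,  0,    /* column 0 */
                 -s,  c,  0,  0,    /* column 1 */
                  0,  0,  1,  0,    /* column 2 */
                  0,  0,  0,  1     /* column 3 */
              };
              glMatrixMode(GL_MODELVIEW);
              glLoadMatrixf(m);
          }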

        Immediate Mode? Who needs it anyway? Most applications keep their geometry in some buffer anyhow, so why not simply render directly from that? Actually, since the main bottleneck is the I/O between CPU and GPU, you want to keep the geometry data in GPU memory. Then Vertex Buffer Objects, used through the Vertex Array interface, are the only way to go. Immediate Mode simply makes no sense. It never did. Why was it there in the first place?

        Display Lists? They make no sense without immediate mode, and have been superseded by VBOs and Index Buffer Objects. And since OpenGL-3 and later thankfully don't maintain as much state as OpenGL-1.x used to, using Display Lists to quickly change state is not much of a necessity now.

        The whole point of the texture environment became moot when fixed-function GPUs became obsolete. Good luck finding a non-programmable GPU these days. Texture environment state gone? Good riddance.

        Nobody prevents you from using OpenGL-1.x; it's still there in the drivers. But don't complain about OpenGL-ES not having certain things, because OpenGL-ES was never meant to be backwards compatible with OpenGL. Not being compatible, to allow for implementation in resource-restricted environments, was and is the whole point of ES. Those features have been stripped away so that you actually could have stable 3D graphics on embedded platforms. We're not talking about just the iPad or iPhone, but also your GPS and other, smaller embedded systems which don't have GPUs as capable as the iPad's.

  27. Gareth Rees says:

    I've done something similar (porting a framework originally built on top of OpenGL to run on top of OpenGL ES instead) and I agree with everything you write here. There was no good reason for OpenGL ES not to provide a compatibility layer for immediate-mode drawing. The ease with which it's possible to implement such a layer oneself demonstrates that. Such a compatibility layer would undoubtedly run slowly, but the spec could have deprecated it and explained why, and people who needed the speed could have ported their applications in due course.

  28. So, I've got Xcode 4.2.1, and am fairly certain I'm a registered (just registered, not paying) Apple "developer", but when I click "use for development" on my iPhone, I'm told 'The version of iOS on "iPhone" does not match any of the versions of iOS supported for development with this installation of the iOS SDK', sending me to http://developer.apple.com/iphone/program/download.html ... but you can't get there without paying the $99/year tax. Am I missing something? I mean, I've obviously got a legal copy of Xcode, in that it's running...

    I guess that's in line with Apple's practices, but then why can I build for older SDKs? (Apparently I can build for iOS 4.2, 4.3, 5.0 (9A334), and "Latest" which, it would seem, is a lie.) (My phone has "5.1.1 (9B206)".)

    (Also: I'm shocked that the "Android version" demands haven't come up yet. I guess kicking the OpenGL anthill distracted the trolls?)

    • Frode M says:

      You can't run on the device unless you pony up the $99 and add your phone's UDID (serial-number equivalent) to your account in the web provisioning portal. Also, you usually need to upgrade Xcode to debug on devices when you upgrade the OS on them.

      • Huh. So what's it mean that I was able to successfully click "switch to developer mode" on my phone, and then get the version number complaint?

        (I'm sure it would break at some point trying to build, if I had the right version of XCode, mind, just... seems like that'd be the time to say, "Gimme a Benjamin". Oh, right.)

    • jwz says:

      I will happily accept patches that make it run on Android!

      Note that all of this is written in ANSI C, not Java, so there's your first hurdle. (The latest thread about this was last week.)

      I don't own, and am currently uninterested in owning, any Android devices, so I've never bothered to try and install their simulator. (Well, I did try once, a year or two ago, but it was so difficult that it exceeded that day's allotment of fucks to give.)

      • An actual competent Android programmer (which, alas, rules me out), would port it as a live wallpaper.

      • I was merely observing the Monorail Mind of the Internet Troll. (I couldn't, personally, give less of a shit about an Android version of anything, but I did read that thread the first time, and it seems completely practical, modulo how much standardization one might reasonably expect of handset manufacturers on the GPU side... which is why programming anything "for Android" is such a nightmare, as I understand from my friends who do so for a living. I usually stop listening when they start talking about hardware companies I thought had failed 5 years ago.)

    • PJ Cabrera says:

      When Xcode says "Latest", they mean latest iOS version supported by the version of Xcode you are running. Xcode 4.2.1 supports up to iOS 5.0.

      The current version of Xcode is 4.3.2, which supports up to iOS 5.1. This is going to be superseded by Xcode 4.4 with iOS 6 in the next three months or so.

      By choosing "Latest" in your project, it will automatically pick the latest iOS version supported by whatever version of Xcode you use to open the project. Before "Latest" was added as a feature, you had to manually change the version # in your project when you upgraded to a latter version of Xcode. This was a bit of a pain in the behind.

      • I'm so confused by Apple's distribution model for XCode. I obviously have a "full" version, somehow or another, but not the most current version, and I can go and log in to http://developer.apple.com/ and actually see stuff, but I don't see any way to update XCode, internally or through Software Update (you know, like every single other piece of Mac OS X software Apple sells), so I guess that means that I have to poke at their website in some way. HIG? What HIG?

        And when it comes down to it, I guess I don't really understand what was wrong with make(1). Okay, wait, rewind, scratch that: there're all sorts of things wrong with make(1), but at least it was clear how to make sure I had the "current" version of it.

        • nknight says:

          Current versions of Xcode are distributed through the Mac App Store and have been for a while now.

          • Not in a way that they're updated without my doing something. That'd be okay, if they didn't need to be updated to work with my phone... WHICH IS UPDATED AUTOMATICALLY.

            Pick a fucking distribution system. Or two. I'd be okay with two, so long as they actually communicate with each other.

            • nknight says:

              I'm honestly not sure what you're upset about. My iPhone and iPad don't update automatically, I have to go press a button in the settings menu to do it, just like you do with Xcode (go into the App Store, press the update button, wait 10 minutes, done).

        • jwz says:

          Yeah, it's a mess.

          1. No matter what version of Xcode you have, it doesn't update through Software Update. Either you have to log in and download a new version manually from the web site, or you have to download a new version manually through the hideous and flaky MacOS App Store program.
          2. It's always like 2+ GB.
          3. Sometimes they charge you for the N.M version while the N.(M-1) version is free. It is hard to predict when this is the case but it usually corresponds to small values of M and/or WWDC calendar dates.
          4. If you want to debug software on your phone, you need an Xcode that was released later than the OS that your phone has installed.
          5. You can never install software that you wrote on a phone that you own without paying apple $99 (per year).
          6. Taking it - but failing to like it - is considered a violation of your TOS.

          • That there is a steaming pile.

            I actually have other ("put my thing in the App Store, you rapists") reasons to cough up $99 for a year or two.

            But, um, wow. Really? Yes, I know: really.

          • nknight says:

            #2 is no longer true, Xcode has been updated exactly the same as every other app in the store for several months now. Last update for me was around 90MB.

            #3 is not random, it's linked to your OS version. Xcode releases correspond to particular OS X releases, and if you have the corresponding version of OS X, the update is free.

  29. A comment about OpenGL vs. OpenGL ES. Put basically, most of you are completely wrong, but that's okay, I understand why.

    The underlying architecture of graphics chips has changed radically since OpenGL (and IrisGL, which it was based on) was introduced. Moreover, immediate mode was, and always has been, an abomination. The underlying hardware never worked that way, and trying to pretend it did was a performance nightmare. It was a trap that SGI frequently fell into itself, actually.

    I've been using GL off and on since the SGI days and I'm quite happy with OpenGL ES. It encapsulates the move to shader programming well and gets rid of the nonsense that you never should have been using in the first place.

    So sure, I can understand how that's confusing if you don't do a lot of graphics programming. Luckily, there are nice high level APIs you can use that hide all of this from you. Seriously, go use one of those.

    But for the love of all that's holy, don't make your own immediate mode wrapper and then complain about how a field you haven't been involved in for much of the past decade has moved on without you. I mean, wow. Porting a screen saver to a platform that doesn't even have screen savers? Actually no, that's just fine if you're trying to learn a new platform. It's fine. But don't go claiming everyone's doing it wrong when you're not even that aware of what "it" is.

    This is not profound, it's not all that clever. It's just a bit cringe worthy. Sorry, man.

    • JeffW says:

      JWZ writes software, and while hooking into some low level things, he generally writes fairly high level things.

      Writing high-level applications, he shouldn't have to care that graphics hardware has changed in the past 20 years. I mean, not any more than he should care that XScreenSaver was written on a 20MHz 68020 and now runs on things a bazillion times faster.

      CPU manufacturers have chosen to implement some things in microcode; they have chosen to have different numbers of units that do different things -- fixed-precision math, FP math, ALU, etc.; they have chosen to allow, and actually run, code out of order; and various other things which are dramatic changes from the state of the world 20 years ago, but which have forced no changes on application developers. Your application might run faster after recompiling with a smarter compiler. Your application might run faster if you use threads. Your application will just run faster because -- despite going through a compatibility library, a compatibility OS mode, using a deprecated CPU mode, being translated through microcode, and being run out of order on an unpredictable execution unit -- those translations are written by people who don't have their heads in their asses, and it is actually being executed on the hardware of today.

      Microcode protects application developers from this. Drivers protect application developers from this. Compilers protect application developers from this. Libraries protect application developers from this.

      Arguing over whether OpenGL is a driver, a library, an API, or a bit of all three is entirely irrelevant. Any one of those, or any combination of the three, could take on the responsibility of NOT FUCKING OVER application developers.

      • "JWZ writes software..."

        Well... no. Not really. He's basically a hobbyist now. He, and his fans here, are just checking in on more than a decade of hardware and software evolution -- evolution which has produced explosive growth in that industry, I might add.

        As for the CPU argument: So what? We're talking about the graphics subsystem here. What's good for one isn't necessarily good for the other. Yes, sometimes backward compatibility is good (see also: Windows on the PC), but sometimes it's an anchor around your neck (see also: Windows on mobile).

        The very success of OpenGL on mobile devices is due to the consortium stripping that bad boy down. That success apparently prompted a guy to dust off an ancient piece of software, port it and then bitch about having to port it.

        Mmm... irony.

        • JeffW says:

          That JWZ has provided the compatibility layer within a few days of work is proof that it is possible. That you may think the old way is somehow impure is irrelevant, and that it might be slower than doing it the new way is also irrelevant.

          No one is disputing that the new way might be easier and better, or that a new, easy, well-defined, better thing makes writing books and attracting new developers easier. That isn't the question. Those are valuable things, and they have happened.

          The relevant question is: could the spec have included compatibility, at no cost in hardware, and no cost to software not using that layer? Yes.

          To what end? Saving 2 weeks of development effort?

          • Compatibility with the immediate mode way of thinking is pretty common. You'll find similar things in a lot of OpenGL based toolkits. I usually stick the word "Builder" in my objects when I'm doing that kind of thing.

            We'll try and make it more performant, though, so we don't typically follow the exact immediate mode pattern. It'll deviate in a way that resembles more what the caller is thinking about (e.g. polygons) and less the exact calls you'd make in that mode.

            And that's a fine place to put that, in a high level toolkit. It doesn't belong in the driver and it certainly doesn't belong in anything a chip designer should have to think about.

            As for your questions about compatibility with the older modes, it's a lot harder than that. And it's kind of moot. The "start it over" contingent won, these are the results, and wow did it ever work out well. OpenGL ES has taken off like I never would have believed.

            • JeffW says:

              It needs to be maintained wherever it is defined. You want to focus this on the language of "driver", "api", "framework". Whatever. OpenGL covers much of that stack.

              Now, granting that OpenGL covers a huge stack, it took on that huge stack, and it is thus responsible for maintaining that huge stack.

      • SenorBlanco says:

        If you want to get maximum performance from a modern CPU, you need to use vector instructions. And since vectorizing compilers suck, this means using a vectorized library (if it happens to do what you need), using intrinsics, or (horrors) inline assembly. In short, you have to rewrite your code. And if you want to take advantage of multicore CPUs (and what CPU isn't, these days?), you need to use threads. See above: you have to rewrite your code.
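
        To illustrate the rewrite-for-vectors point, a minimal sketch assuming an x86 target with SSE; the function names are illustrative:

          #include <xmmintrin.h>    /* SSE intrinsics */

          /* Scalar version: one add per loop iteration. */
          void add_scalar(float *dst, const float *a, const float *b, int n)
          {
              int i;
              for (i = 0; i < n; i++)
                  dst[i] = a[i] + b[i];
          }

          /* Hand-vectorized version: four adds per iteration. Assumes n
             is divisible by 4; a real version needs a scalar tail loop. */
          void add_vec4(float *dst, const float *a, const float *b, int n)
          {
              int i;
              for (i = 0; i < n; i += 4) {
                  __m128 va = _mm_loadu_ps(a + i);
                  __m128 vb = _mm_loadu_ps(b + i);
                  _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
              }
          }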

        So even in CPU-land, no, you are not protected from the technological advances of the last 20 years. Sorry.

        If your point is that "I don't care about performance, I just want my old shitty code to run" then sure, use a compatibility library for OpenGL 1.1 and keep your CPU code scalar and single-threaded. But don't try to hold back the industry for your use case.

  30. Cass Everitt says:

    The Regal project (http://github.com/p3/regal) is a user library that runs on top of all modern OpenGL variants (es 2, core, compatibility) and one of the key things it provides is support for the old deprecated APIs. So you can do immediate mode and fixed function on iOS and Android, but you also get universal support for modern APIs like DSA (direct state access).

    Making a thin driver interface that is a close match for the hardware is a good and important goal.

    Not breaking working code is also a good and important goal.

    There's no reason we can't have both.

    • PA says:

      "From an application developer's perspective, Regal just looks like an OpenGL implementation. You link with it instead of your platform's OpenGL library or framework, and that's really all you have to do to use Regal. The rest of your code can remain unchanged."

      Interesting -- so this is the same as jwz's, but more complete and packaged as a library?

      So it would be interesting to "port" XScreenSaver to Regal and see how that goes.

      • Cass Everitt says:

        Yup, that port would be very interesting. The goal of Regal is that you shouldn't have to change the actual rendering code at all.

  31. Paul F says:

    This is awesome, thanks!

    It would be nice if someone had time to rewrite jwxyz.m to use OpenGL instead of Quartz 2D. I've found Quartz performance to be really bad on the iPhone, which is probably why SpeedMine is so slow.

    • jwz says:

      Not just the phone, Quartz 2D performance is crap on the desktop too. It's clearly not really optimized at all. Which is somewhat understandable: who uses Quartz for doing high performance graphics? It's just for drawing buttons, and as long as that gets done before the user blinks, you're fine. For everything that Quartz is used for in the real world, it's fast enough.

      I did consider retargeting jwxyz.c at GL instead of Quartz but I'm not sure it would help. For example, the performance is especially bad in Pong and other things that use the analogtv.c code (and other hacks that re-write the full framebuffer every frame, like Kumppa and Moire2 -- which, while they aren't the most interesting hacks in the world, do demonstrate the performance difference between jwxyz and real X11 quite ably!)

      So, as an experiment, I rewrote analogtv.c:analogtv_draw to call glTexImage2D every frame instead of XPutImage, and the performance was pretty much exactly the same.

      Right now the code is set up as, "I have a client-side full-screen array of brand-new pixels. Now I want them on the screen. Repeat at 30fps." There are surely more efficient ways that code like analogtv.c could be implemented in a GL world, but it's not clear to me how to do it without big structural changes to the caller. And, for better or worse, the whole mission of jwxyz.c is "no structural changes to the caller."
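
      For reference, a minimal sketch of that experiment, assuming an ES 1.1 context with a power-of-two texture already created and bound, and texturing and the two client-side arrays already enabled; the dimensions and function name are illustrative:

        #include <OpenGLES/ES1/gl.h>

        #define W 512    /* illustrative framebuffer size */
        #define H 512

        /* Re-upload a full frame of client-side RGBA pixels and draw
           it as two textured triangles covering the screen. */
        static void blit_frame(const unsigned char *rgba)
        {
            static const GLfloat verts[]  = { -1,-1,  1,-1,  -1,1,  1,1 };
            static const GLfloat coords[] = {  0,0,   1,0,    0,1,  1,1 };

            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, rgba);  /* the slow part */

            glVertexPointer(2, GL_FLOAT, 0, verts);
            glTexCoordPointer(2, GL_FLOAT, 0, coords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }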

      • Paul F says:

        You're pretty much screwed on analogtv and other bitmap hacks, but I think it would definitely help to use GL for line- and polygon-oriented hacks, without having to rewrite the original code. Hopefully there aren't any cases where you need to use Quartz for some particular feature that OpenGL doesn't have.

  32. Nick Ghers says:

    Why don't many web repositories have direct links to the latest version? For example, vim's URL will always contain the version number in it. This makes it a pain in the ass: you have to parse some stupid index file (whose name keeps changing) to find out what the latest file on the server is. Vim isn't the only one.

  33. I sometimes read the comments here like one would gawk at a car crash. Watching the IQ-challenged and the humour-challenged attempt to reinterpret reality, over and over, gives me a sick glee.

    Then I remember, halfway through, that jwz is in the car crash. And I close my browser.

  34. jwz says:

    So, there's a thread about this post on Hacker News, and it's mostly a re-hash of all the stuff that has been said in the comments here -- in particular, it looks like at least 60% of the comments there are people trying to pretend that "OpenGL" and "OpenGL ES" have nothing to do with each other, which is easily debunkable nonsense. They say things like, "no working code was broken, because there was by definition no working OpenGL ES code before the OpenGL ES spec was written!" What kind of crazyland do these people come from?

    Anyway.

    I'll copy here a few of the comments from over there that I enjoyed:

    prodigal_erik

    You probably have enough horsepower at your disposal to emulate each and every computer you ever bought (simultaneously!) and run all that software forever. But instead we're going to require any tool you want to use to be rewritten half a dozen times over the course of your career alone. And why? Because fuck you, we just can't be bothered to start taking engineering seriously.

    potkor

    These "embedded systems with highly constrained resources" are machines with 512MB+ of memory and monster CPU/GPUs. It's perfectly OK to write code for them that you haven't bled over to optimize the hell out of.

    And JWZ just showed you don't need "tons of driver calls" unless you mean simple function calls that don't cross the kernel boundary.

    seclorum

    It's not absurd at all. He's got a massive collection of amazing GL-based screensavers that a LOT of people have learned graphics programming from over the years. There are still contributions being made to this collection in 2012, and there have been consistent additions to the collection since the very early '90s. This is no toy collection.

    Fact is, a lot of great OpenGL code could run on the iPad today, if only the false ideology of cutting 'archaic things' out of the ES profiles weren't getting in the way. There are plenty of opportunities for OpenGL apps from decades ago to be re-targeted to the new platforms, if not for this problem -- and jwz is right to point it out.

    malachismith

    You kind of miss the point. This is a philosophical argument ILLUSTRATED through OpenGL and a port. To quote, "thou shalt not break working code"

    astrodust

    jwz, if you've been following him, is inherently pragmatic. He's a follower of the philosophy that the computer, and by extension the frameworks and languages to program it, should be subservient to the programmer. They shouldn't tell you how to live your life or behave like a stubborn mule when, for whatever well intentioned reason, people decided to overhaul the spec everyone depended on.

    bigiain

    While that's all true - it completely ignores the externalities. It cost Jamie 3 or 4 attempts and eventually 3 days work to get his unbroken code running on the new OpenGL version. What's the multiplier needed to account for the cost this change incurred for all the other developers who wrote code using OpenGL before ES?

    Sure, maintaining backwards compatibility is costly for a project like OpenGL. But if you choose _not_ to maintain backwards compatibility, for whatever reason, and then someone shows that your reasoning is bogus by reimplementing the old API calls in 3 days, you should expect to get called "idiots". (And you then should either be sure enough in your convictions that you know jwz is wrong, or take it on the chin and say "Hey, we fucked _that_ one up. Mind if we include your code in our next release?")

    • cmccabe says:

      OpenGL and OpenGLES are more than just APIs. They are contracts between software developers and hardware manufacturers. Hardware developers like NVidia and Intel spend a lot of time and money optimizing their hardware to run OpenGL programs fast. If the standard is bloated with archaic and useless features, that means that companies are going to have to spend time optimizing and implementing those ancient and useless features rather than making things faster for the modern code.

      You might think that NVIDIA and its competitors could just add a small shim layer for the old APIs, and call it quits. But it doesn't work that way. Any feature you keep in the standard is an ongoing maintenance burden. People will want to know why $FOOCORP's graphics chipset is faster than your chipset on a given benchmark or video game. Telling them "it's because they used deprecated APIs, and we don't optimize for those," is not going to make the boss happy.

      Having ancient APIs hanging around makes software developers' jobs more difficult as well. If you're writing a video game and you accidentally use a deprecated API which is emulated through some software shim, your code might go 100x slower in many cases. Tracking down this kind of performance bottleneck is not going to be easy. Meanwhile, there are still tons and tons of reference materials and tutorials about OpenGL that were written in the 1990s and tell you to use all of those deprecated APIs!

      The OpenGLES guys took advantage of the new up-and-coming mobile platforms to make a clean break with a lot of the legacy cruft. And frankly, we're better off for it. Our devices have better battery life, and our programs run faster because we slayed the legacy dragons.

      One final note. If it had been possible to implement the old immediate mode (glBegin, glEnd, etc) API efficiently, there would have been no need for a new API! The shims you came up with may handle some of the common cases, but they are by no means correct.

      • teapot says:

        You might think that NVIDIA and its competitors could just add a small shim layer for the old APIs, and call it quits. But it doesn't work that way. Any feature you keep in the standard is an ongoing maintenance burden. People will want to know why $FOOCORP's graphics chipset is faster than your chipset on a given benchmark or video game. Telling them "it's because they used deprecated APIs, and we don't optimize for those," is not going to make the boss happy.

        Not exactly.

        Companies have developers with different levels of ability and experience. Good ones are usually busy doing something complex and clearly important. For some companies, that's only the most demanding parts of new development. In better ones, some decent developers are supposed to maintain existing products (but when anything goes wrong, they are whisked into current development, leaving maintenance of current products to someone else). The idea of increasing the workload of those people scares the management, because those people usually have so much work, and the development schedule is so crazy, that they are constantly keeping themselves right at the boundary where unsafe development practices start. If they tried to go any faster, the quality of their work would take a sharp nosedive.

        So any small and supposedly predictable project -- like a compatibility layer for new products supporting an old interface -- is given to someone else. That someone else may be a new, inexperienced employee, may be a programmer recently moved from an unrelated project, or may be just a shit programmer that no one knows how to get rid of. He is in unfamiliar territory, he wasn't around when people were working on the thing he is supposed to emulate, he is not involved with current development, his background knowledge is likely insufficient, and he didn't even get a chance to find and plug the holes in his education -- and here I am being generous, assuming that he is capable of and interested in doing any of that in the first place.

        As a result, the simplest project takes the same amount of time and effort, and produces the same number of bugs, as the most complex one. Management, seeing this, decides that every checklist item "costs" the same. After all, you can't pay a developer less than it takes to survive in the vicinity of the company's office, and even good developers rarely demand more than double the salary of a minimally competent person; so as long as you have one developer per such mini-project, you can't make a simple feature take less time or money than a complex one.

        With this sad experience, companies fight tooth and nail to reduce the requirements for what their products are supposed to do. Even if it's something blindingly obvious, like not breaking code written by everyone else over decades. And it inevitably spills into the standards.

        What we have witnessed is this logic being demonstrated to be wrong: a completely unrelated, but knowledgeable and experienced, person looked at these APIs while trying to apply them to his existing code, was surprised and disgusted by the situation, and wrote in a few days what a newbie would have spent a year on. The code does what it's supposed to do, it uses an internal representation that a newbie probably would not have thought of, and it translates the interfaces so that old code works. Problem solved.

        If all those companies that whined to each other at the standards committees about the difficulty of supporting features that take years to develop had instead agreed to slow their development process down by a few weeks so that experienced developers could work on "less critical" things, they would have gotten the same problem solved by the same kind of people they already have.

        But then companies would have to admit that:

        1. Their developers are nowhere close to comparable levels of ability and experience, and despite this they can't just get rid of the bad and inexperienced ones. That's like a corporate version of the punishment of Tantalus.
        2. Their internal development model jumps between "one man per project" and "dysfunctional committee", with nothing in between.
        3. They are so afraid of a competitor getting a product to market faster that all supposedly valuable developers work at a breakneck pace, constantly on the brink of descending into chaos.
        4. They can't bring themselves to co-operate with each other even on a simple thing that any of them could write in a vendor-neutral way, and that would clearly benefit all of them and the vast number of people who use their products.

        It's much easier to pretend that "all features are equal" and act like lazy, bratty kids, demanding that everyone else should throw away working code.

        • cmccabe says:

          That's a lot of words to say "NVIDIA and its competitors aren't competent enough to write a good wrapper layer." But it's just not true. I know it's not true because all of the companies I know of that sell OpenGLES-capable chipsets also sell OpenGL-capable chipsets. Do you think they left out the compatibility layer for the GLES parts because all the "good" programmers work on the GL team and not the GLES team? Or because they're "lazy, bratty kids" (your words)?

          They left out the compatibility layer because they didn't think there would be that much code reuse between desktop software and mobile software. And they realized that supporting the legacy features would have a high cost in terms of increased power consumption, a buggier product, and worse performance. And frankly, their decision was the right one. There really hasn't been that much code reuse between desktop and mobile. OpenGLES is doing fine in the marketplace without the legacy baggage. Programmers continue to get paid to port or rewrite whatever code needs to be ported or rewritten for mobile. It's good for the environment (because of the reduced power consumption), it's good for the economy -- I don't see what's not to like here.

          • jwz says:

            There really hasn't been that much code reuse between desktop and mobile.

            You know, I have a theory about why that is. I think by now all of you can probably guess it. It has something to do with OpenGL versus OpenGLES, and AppKit versus UIKit, and...

          • teapot says:

            That's a lot of words to say "NVIDIA and its competitors aren't competent enough to write a good wrapper layer."

            No, that's an explanation of why such decisions are made even when companies have competent people who could easily implement such a layer if they were asked, or even allowed, to work on that part of the project.

            Do you think they left out the compatibility layer for the GLES parts because all the "good" programmers work on the GL team and not the GLES team?

            No, it's because all good programmers are working deep in the guts of their implementations.

            Or because they're "lazy, bratty kids" (your words)?

            That describes, though non-specifically, most of their behavior.

  35. sean b says:

    glPolygonMode doesn't create hidden-surface wireframes, it just draws the edges of polygons as lines, so as long as GLES has GL_LINES rendering, you can still simulate this with vertex buffers.

    (To do hidden-surface wireframe in regular GL, people already had to render the surfaces in black, then draw the wireframe again, using glPolygonOffset to avoid z-fighting. Dunno if GLES gives you glPolygonOffset, though. If not, you can partially implement it with projection-matrix tweaks, but only the fixed offset, not the slope-based offset.)
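
    As a rough sketch of that two-pass technique (GLES 1.1 assumed; draw_mesh_triangles and draw_mesh_edges are hypothetical stand-ins for your own vertex-array calls):

    #include <GLES/gl.h>

    extern void draw_mesh_triangles (void);  /* glDrawArrays (GL_TRIANGLES, ...) */
    extern void draw_mesh_edges (void);      /* glDrawArrays (GL_LINES, ...) */

    void
    draw_hidden_line_wireframe (void)
    {
      /* Pass 1: fill the surfaces in the background color, pushed
         slightly back in depth so the lines won't z-fight with them. */
      glEnable (GL_POLYGON_OFFSET_FILL);
      glPolygonOffset (1.0f, 1.0f);
      glColor4f (0, 0, 0, 1);                /* background color */
      draw_mesh_triangles ();
      glDisable (GL_POLYGON_OFFSET_FILL);

      /* Pass 2: draw the edges on top; the offset surfaces now
         occlude the edges that belong to hidden faces. */
      glColor4f (1, 1, 1, 1);
      draw_mesh_edges ();
    }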

    But yeah, this whole thing is terrible. I have a lot of game-industry programmers following me on Twitter, and some of them totally agree there should have been a glu-style replacement for this stuff, and some of them are insistent that no, that's terrible for the driver writers, blah blah (despite the whole argument being that it didn't have to be in the driver; but Twitter is terrible for communication).

    And you're not the only one doing this independently. There's Regal, linked above. And I have a partially implemented "high performance" GL immediate-mode wrapper, abandoned when I got bored, since I'm not actually developing on any platforms that lack immediate mode.

    (It works by making all the glTexCoord and glVertex calls function pointers. The first time you hit one inside a glBegin, it decides what format that part of the vertex data is going to be, then replaces the function pointer with one that just stores the data, so subsequent calls are basically a 1:1 copy. If you call both, say, glColor3f and glColor4ub inside a single glBegin/glEnd, the later ones go through a slow path that does format conversions. Then glEnd resets all the function pointers. Also, the "optimal" functions can't actually store the data straight into a vertex buffer; they still have to store it in a state buffer to handle the cases where people don't set glColor on every vertex but only sparsely, e.g. once per triangle/quad. I didn't do stuff like display lists, but those wouldn't even need more function-pointer implementations, since you'd just copy the vertex-buffer data somewhere else. But (a) it's not clear the performance gain is that significant, and (b) I doubt that for xscreensaver you care about the performance anyway.)
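
    A skeletal sketch of that function-pointer trick, using hypothetical names (my_glColor3f stands in for the wrapped entry point; a real wrapper would do this for every per-vertex function):

    typedef struct { float r, g, b, a; } color_t;

    static color_t current_color;

    static void color3f_first (float r, float g, float b);
    static void color3f_fast (float r, float g, float b);

    /* The "glColor3f" that client code calls is really this pointer. */
    void (*my_glColor3f) (float, float, float) = color3f_first;

    static void
    color3f_fast (float r, float g, float b)
    {
      /* Fast path: format already decided.  Stash the color in a state
         buffer rather than straight into the vertex array, since callers
         may set the color once per triangle instead of per vertex. */
      current_color.r = r; current_color.g = g;
      current_color.b = b; current_color.a = 1;
    }

    static void
    color3f_first (float r, float g, float b)
    {
      /* Slow path, hit at most once per glBegin: record that the vertex
         format includes a 3-float color (bookkeeping omitted here), then
         swap in the fast path and fall through to it. */
      my_glColor3f = color3f_fast;
      color3f_fast (r, g, b);
    }

    void
    my_glEnd (void)
    {
      /* ... flush the accumulated array with glDrawArrays, then re-arm
         the format-deciding functions for the next glBegin. */
      my_glColor3f = color3f_first;
    }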

    • jwz says:

      Right, I was wrong about that -- PolygonMode LINES doesn't update the depth buffer, but it does do back-face culling, which is what makes it not totally trivial to replace it with LINE_STRIP or something.
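
      For what it's worth, a rough sketch of doing that cull on the CPU before emitting lines, assuming a hypothetical project() that applies the modelview/projection transform and perspective divide:

      typedef struct { float x, y; } vec2;

      extern vec2 project (const float v[3]);  /* hypothetical MVP + divide */

      /* Nonzero if the projected triangle winds counter-clockwise, i.e.
         front-facing under the default glFrontFace (GL_CCW) convention.
         Only the edges of triangles passing this test get appended to
         the GL_LINES vertex array. */
      static int
      front_facing (const float a[3], const float b[3], const float c[3])
      {
        vec2 p0 = project (a), p1 = project (b), p2 = project (c);
        float area = (p1.x - p0.x) * (p2.y - p0.y)
                   - (p2.x - p0.x) * (p1.y - p0.y);
        return area > 0;
      }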

      I'm told that while PolygonOffset does exist in GLES, it turns out that Apple doesn't implement it, so there's that, too.

  36. The easy way to get an image from the Photo Gallery is UIImagePicker, configured to read from the library. The UI isn't as pretty as ALAssetsLibrary, but UIImagePicker is much easier to use.

  37. jwz says:

    A nice, long post from William Edwards: If you defend those involved in the OpenGL ES specification, you are an idiot.

    There is only one reason that these industry committees threw out OpenGL backwards compatibility: the interests of the committee member companies.

    Surprise surprise, the committee members overwhelmingly represent the interests of the GPU makers. Their mission is to subtly improve their position with regard to the competitors they are sitting around a table with, e.g. by pushing things slightly in a direction their hardware does well, and pushing away from -- by deprecation and omission -- those areas that would require investment.

    OpenGL ES was hatched to ensure that the devices could be labeled “OpenGL” without those mobile phone GPU companies having to invest engineering time in making it so!

    And the motivation was money, not technical purity or vision.

    And the kind of people who get sent to committees are often not the pragmatic get-things-done type of people. Companies quickly build “system architecture” teams to put the dangerous architecture astronauts as far away from the get-things-done teams as possible, and give them tasks like sitting on standards committees to occupy them.

  38. matt says:

    I came, I saw, I downloaded the code, I built, I installed on my iPhone. Thanks for making this so easy to do!

    (I did hit the issue with the sonar module, but I just followed the instructions provided in the error message and everything was fine after that.)

    • jwz says:

      Awesome! Let me know if you find any bugs. I think there are only like 3 people besides me who have ever run this so far.

      • matt says:

        Did have one crash, but I don't regard that as important.

        A possible design consideration: the iPad version runs so slowly, for many of the savers, as to be unusable on my iPad 3. I'm wondering whether those savers are overwhelmed by the number of pixels on the Retina display, in which case it might help to pretend there are 1/4 as many. Although there are physically 2048-by-1536 = 3,145,728 pixels, iPad apps generally act as if this were a 1024-by-768 screen, i.e. 786,432 pixels. That makes text and geometry come out at the right size; when asked to supply an image to be drawn, an app may supply it at double resolution along with a hint to the system that it is double resolution.

        • jwz says:

          Which savers seem overly slow on the retina iPads? Do they tend to be the GL ones or the X11 ones? Are they the same ones that are kinda slow on MacOS?

          • matt says:

            Well, just to begin at the beginning, for Abstractile I get a frame rate of 1.6 fps on the iPad 3. On the iPhone 4S with the same settings it's 30 fps.

            • jwz says:

              Well that doesn't make any damned sense at all, because abstractile is only touching a thousand pixels a frame; it's not re-writing the whole frame buffer each time.

              • matt says:

                The pixels hypothesis was just a wild guess and may well make no sense, but the phenomenon is objectively real: a lot of these savers max out at less than 2 fps on the iPad but run great on the iPhone. My solution: enjoy them on the iPhone!

                • jwz says:

                  It would help a lot if you narrowed down which ones are slow. An informative set of tests might be: dangerball, julia, bouncingcow, attraction, spotlight.

                  • matt says:

                    Dangerball: works great. High frame rate, low load.

                    Julia: 1.7 FPS, near 100% load.

                    Bouncingcow: works great. High frame rate, low load.

                    Attraction: 1.7 FPS, near 100% load.

                    Spotlight: 1.6 FPS, movement very jerky (big jumps), near 100% load.

  39. > There's no glPolygonMode with GL_LINE, so I don't see an easy way to implement wireframe objects with hidden surface removal. Maybe rendering them twice with glPolygonOffset?

    You can use a fragment shader with a texture coordinate that, using dFdx/dFdy, determines whether it's the last pixel in either direction to be rendered -- and if so, renders it, otherwise discards it.
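
    Something along these lines, perhaps (GLES 2 only, as the reply below notes; v_bary, the barycentric-style varying, and the 1-pixel threshold are assumptions on my part):

    /* Fragment shader sketch: keep only pixels within ~1px of a
       triangle edge, using screen-space derivatives. */
    static const char *wireframe_fragment_shader =
      "#extension GL_OES_standard_derivatives : enable\n"
      "precision mediump float;\n"
      "varying vec3 v_bary;\n"             /* barycentric coord per vertex */
      "void main () {\n"
      "  vec3 w = fwidth (v_bary);\n"      /* ~ |dFdx| + |dFdy| */
      "  vec3 d = v_bary / w;\n"           /* distance to each edge, in pixels */
      "  if (min (d.x, min (d.y, d.z)) > 1.0) discard;\n"
      "  gl_FragColor = vec4 (1.0);\n"
      "}\n";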

    • jwz says:

      Can you use fragment shaders in GLES 1? I thought the Shader Language was a GLES 2 thing, and you were never allowed to mix-and-match GLES 1 and GLES 2 in the same app. All of my code is targeted at GLES 1, because it still contains the classic lighting and matrix model.

      • I see the issue there. I forgot again that GLES 1 doesn't have shaders... which IMO is the stupidest part of it. OpenGL was finally re-engineered to allow programmable cores, and then they go and remove exactly that.

        In limited fixed-function mode I see hardly any way to do that... save for a manually-mipmapped texture with just a colored border -- but that may not even be possible. If you do the mipmapping yourself, you can make the border 1px wide at every level. Add GL_LINEAR filtering and you've got a border.
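
        To make that concrete, a sketch under stated assumptions (GLES 1.1, RGBA textures, a red border baked by hand into every level):

        #include <stdlib.h>
        #include <GLES/gl.h>

        /* Upload hand-built mipmap levels whose colored border stays
           exactly one texel wide at every level.  `size' must be a
           power of 2. */
        static void
        upload_bordered_mipmaps (int size)
        {
          int level;
          for (level = 0; size >= 1; size /= 2, level++)
            {
              unsigned char *px = malloc (size * size * 4);
              int x, y;
              for (y = 0; y < size; y++)
                for (x = 0; x < size; x++)
                  {
                    int edge = (x == 0 || y == 0 ||
                                x == size - 1 || y == size - 1);
                    unsigned char *p = px + (y * size + x) * 4;
                    p[0] = 255;                    /* red channel always on */
                    p[1] = p[2] = edge ? 0 : 255;  /* white interior, red border */
                    p[3] = 255;
                  }
              glTexImage2D (GL_TEXTURE_2D, level, GL_RGBA, size, size, 0,
                            GL_RGBA, GL_UNSIGNED_BYTE, px);
              free (px);
            }
          glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                           GL_LINEAR_MIPMAP_LINEAR);
        }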