Sheeple 3.0 is finally out!
After months in development, it’s finally done. A major rewrite of the entire Sheeple system, this new version boasts a completely new, cleaned-up, and serious’d interface; reworked, lightweight objects; and efficient property access (currently matching CCL’s slot-value performance). You can download a tarball here, or use asdf-install to grab the latest version.
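For the asdf-install route, grabbing it from a REPL looks something like this (a sketch; I’m assuming asdf-install is already set up and that the system is registered under the name `sheeple`):

```lisp
;; Fetch and install the latest released tarball, then load the system.
(asdf-install:install 'sheeple)
(asdf:oos 'asdf:load-op 'sheeple)
```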
I’m really excited about this release. I would write more about it and all the goodies it has, but I have a presentation to finish. Expect more in a later post.
Okay, so I haven’t actually posted anything about Sheeple in a while.
Sheeple took a break for a few weeks, but development is up to full steam again. There’s some shiny new special sauce going on under the hood now, with some -very- promising early results.
See this paste for some picobenchmarks on various implementations. Notice: direct-property access is now as fast as slot-value on CCL. :)
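If you want to try this kind of picobenchmark yourself, the shape of it is just timing raw slot access in a tight loop. The CLOS half below is runnable anywhere; the Sheeple half is sketched in comments, since the exact exported names (`defproto`, `direct-property-value`) are my assumption and you should check them against the release:

```lisp
;; Picobenchmark sketch: time a few million raw slot reads.
(defclass point () ((x :initform 0)))

(defun bench-slot-value (n)
  (let ((p (make-instance 'point)))
    ;; TIME prints run time to *trace-output*.
    (time (dotimes (i n) (slot-value p 'x)))))

;; The Sheeple side would look something like (names assumed):
;;   (defproto =point= () ((x 0)))
;;   (time (dotimes (i n) (direct-property-value =point= 'x)))

(bench-slot-value 1000000)
```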
Unfortunately, the rest of Sheeple doesn’t perform quite that well yet. The fancy new secret sauce allows for a lot of optimizations that weren’t possible before. Once this code is stabilized and tagged as 3.0(!), I’m bringing back a bunch of the different caching schemes Sheeple was using for dispatch. Tonight’s benchmarks were promising — I think Sheeple may actually end up faster than optimized CLOS implementations such as CCL’s and SBCL’s.
We’ll see. We’ll see…
For the longest time, I wanted to start learning about issues related to concurrency, and how to handle them in Lisp. I kept reading around to learn about different approaches to writing concurrent code, and studied up on things like Software Transactional Memory, futures/promises, and Erlang-style actors. After struggling to wrap my head around all these different approaches, I was left a bit disappointed: none of them seemed to be the right combination of generalized+clean+easy.
STM caught my eye for a while, but it still has its issues, and I’m still not sure how I feel about the whole “thrash until you can agree on something” approach — it seems to me like it’s just the same as wrapping code in (with-lock-held …). Futures/promises looked easy enough for simple one-shot tasks, but that’s not necessarily what you want when writing heavily-parallel code: sometimes you want threads constantly yielding values. Erlang-style actors are, of course, one of the big success stories when it comes to writing heavily-parallel code. The model just seems so damn ugly to me, though — and it -is- nice to be able to share data sometimes.
Then, magic happened: I came across something called Communicating Sequential Processes, through Rob Pike’s Google Tech Talk on Newsqueak. It was really amazing; it seemed to be just what I needed: a relatively low-level synchronisation mechanism that works naturally with the concept of multiple parallel threads/processes. I’m totally sold on Rob’s point of view here: don’t try to pretend you’re not parallelizing code (hear that, STM? That’s right). Instead, make the parallelization part of your interface. Deadlocks are a bug — find them.
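To make the CSP idea concrete, here’s a toy channel built on bordeaux-threads: `send` pushes a value (the buffer here is unbounded, so it never blocks) and `recv` blocks until a value arrives. The struct and function names are mine, not from any library; a real CSP implementation would also give you unbuffered rendezvous and an alt/select construct. Assumes bordeaux-threads is loaded.

```lisp
;; A minimal CSP-flavored channel: a FIFO queue guarded by a lock
;; and a condition variable.
(defstruct (channel (:constructor make-channel ()))
  (queue '())
  (lock (bt:make-lock "channel"))
  (condition (bt:make-condition-variable)))

(defun send (channel value)
  "Append VALUE to CHANNEL's queue and wake one waiting receiver."
  (bt:with-lock-held ((channel-lock channel))
    (setf (channel-queue channel)
          (nconc (channel-queue channel) (list value)))
    (bt:condition-notify (channel-condition channel))))

(defun recv (channel)
  "Block until CHANNEL has a value, then pop and return it."
  (bt:with-lock-held ((channel-lock channel))
    (loop until (channel-queue channel)
          do (bt:condition-wait (channel-condition channel)
                                (channel-lock channel)))
    (pop (channel-queue channel))))

;; A producer thread constantly yields values; the main thread
;; consumes them as they come in.
(let ((ch (make-channel)))
  (bt:make-thread (lambda ()
                    (dotimes (i 5) (send ch (* i i)))))
  (loop repeat 5 collect (recv ch)))
;; => (0 1 4 9 16)
```

Note how the channel itself is the interface between the two threads: neither one ever sees the other’s state, only the values communicated.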