2.10 Performance

I’ve been reading up on all the great new stuff in Scala 2.10.0-RC3, and as always, one of the features mentioned is performance improvements. I was curious today what that meant for functional style work, so I wrote up a short microbenchmark to test it. I created a Sieve of Eratosthenes using streams, and a Sieve of Sundaram, both shown below:
Sieve of Eratosthenes –

def eratosthenes(toNum: Int) = {
  def sieveHelp(r: IndexedSeq[Int]): Stream[Int] =
    if (r.isEmpty) Stream.empty // guard so forcing the whole stream terminates cleanly
    else r.head #:: sieveHelp(r.tail.filterNot(_ % r.head == 0))
  sieveHelp(2 +: (3 to toNum by 2))
}
Sieve of Sundaram –
def sundaram(toNum: Int) = {
  val n = (toNum - 2) / 2
  val nonPrimes = for (i <- 1 to n; j <- i to (n - i) / (2 * i + 1)) yield i + j + 2 * i * j
  2 +: ((1 to n) diff nonPrimes map (2 * _ + 1))
}
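As a quick sanity check (not part of the benchmark itself), both sieves should produce the same primes for a given bound. Here's a standalone snippet, with the definitions repeated so it runs on its own:

```scala
// Definitions repeated from the post so this snippet is self-contained.
def eratosthenes(toNum: Int) = {
  def sieveHelp(r: IndexedSeq[Int]): Stream[Int] =
    if (r.isEmpty) Stream.empty // guard so forcing the whole stream terminates
    else r.head #:: sieveHelp(r.tail.filterNot(_ % r.head == 0))
  sieveHelp(2 +: (3 to toNum by 2))
}

def sundaram(toNum: Int) = {
  val n = (toNum - 2) / 2
  val nonPrimes = for (i <- 1 to n; j <- i to (n - i) / (2 * i + 1)) yield i + j + 2 * i * j
  2 +: ((1 to n) diff nonPrimes map (2 * _ + 1))
}

val a = eratosthenes(100).toList
val b = sundaram(100).toList
// both start with 2, 3, 5, 7, 11, ... and end at 97
```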

The Sieve of Sundaram is run 120 times, finding all primes below 3,000,000. The Sieve of Eratosthenes is run 60 times and finds all primes below 75,000. The results for both are in the chart below:
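The actual harness is in the repo linked at the bottom; as a rough sketch, a timing loop of this shape is all that's needed (the names here are my own, not from the benchmark source):

```scala
// Minimal timing-loop sketch: run `body` `runs` times, return elapsed millis.
def time[A](runs: Int)(body: => A): Long = {
  val start = System.nanoTime()
  var i = 0
  while (i < runs) { body; i += 1 }
  (System.nanoTime() - start) / 1000000
}

// For the runs described above one would call, e.g.,
//   time(120)(sundaram(3000000))
//   time(60)(eratosthenes(75000).toList)
val demoMs = time(10)(math.sqrt(12345.0)) // cheap stand-in body for illustration
```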

[Chart: benchmark results]

As you can see, the performance improvement for Sundaram is negligible, but the performance of the Sieve of Eratosthenes more than doubled! All in all, I’m looking forward to the release of 2.10 and further speed improvements in the fastest and most flexible non-Java JVM language.

For the source of my benchmark, go here: https://github.com/markehammons/2.10.0-RC3-Benchmark




9 responses to “2.10 Performance”

  1. Pingback: Scala 2.9 vs 2.10 Performance | My Daily Feeds

  2. It would be great if you compared it to writing in imperative style, either in Scala or Java, just to understand what the trade-off is in this case.

  3. Tom Smallcock

    Why do we always see such useless benchmarks like these coming out of the JVM (i.e. Java, Scala, Clojure and JRuby) communities?

    This is not reflective of the performance of real-world applications in any way.

    Why even bother doing something like this?

    • I did this to see if there are any actual changes in list and stream performance between 2.9.2 and 2.10.0. If you have ideas for better microbenchmarks or benchmarks, feel free to suggest them.

  4. Could anyone illuminate *why* it’s faster? Is it better at compiling for comprehensions in 2.10?

    • I believe it may have something to do with inlining improvements in 2.10, but I’m not sure. IIRC, streams are a bunch of delayed function calls wrapped in a nice abstraction.
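      A tiny illustration of that point (names here are mine, just to show the mechanics): the tail of a Stream cons cell is a by-name thunk that is only run when someone forces it.

      ```scala
      // Count how many cells of an infinite Stream actually get built.
      var evaluated = 0
      def from(n: Int): Stream[Int] = {
        evaluated += 1
        n #:: from(n + 1) // tail is a delayed call to from(n + 1)
      }

      val s = from(1)
      val builtBeforeForce = evaluated   // only the first cell exists so far
      val firstFive = s.take(5).toList   // forcing runs the delayed calls
      val builtAfterForce = evaluated    // now five cells have been built
      ```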

  5. csenol

    Some more experiments with your benchmark. It is obviously inlining.

  6. Aren’t these two algorithms used to generate prime factors?
