A bit old but still interesting

  • atzanteol@sh.itjust.works · 4 days ago

    > I find this paper false/misleading.

    They presented their methodology openly and clearly and provided their data for everyone to interpret. You can disagree with the conclusions, but it’s pretty harsh to call it “misleading” simply because you don’t like the results.

    > They just translated one algorithm in many languages, without using the language constructs or specificities to make the algorithm decent performant wise.

    They used two datasets, if you read the paper… It wasn’t “one algorithm”; it was several, drawn from publicly available implementations of those algorithms. They chose an “optimized” set of algorithms from “The Computer Language Benchmarks Game” to produce results for well-optimized code in each language. They then used implementations of various algorithms from Rosetta Code, which contained more… typical implementations that don’t have a heavy focus on performance.

    In fact, using “typical language constructs or specificities” hurt the Java implementations, since `List` is slower than using arrays. Java performed much better (surprisingly well, actually) in the optimized tests than in the Rosetta Code tests.
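    A minimal Java sketch of why that is (the class and sizes here are my own illustration, not from the paper): the “typical” `List<Integer>` version stores every element as a separately boxed object, while the `int[]` version reads a flat block of primitives.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Same computation two ways: a "typical" boxed List vs. a primitive array.
    public class BoxingDemo {
        static long sumList(int n) {
            List<Integer> xs = new ArrayList<>();
            for (int i = 0; i < n; i++) xs.add(i);   // each add autoboxes an int
            long total = 0;
            for (int x : xs) total += x;             // each read unboxes
            return total;
        }

        static long sumArray(int n) {
            int[] xs = new int[n];
            for (int i = 0; i < n; i++) xs[i] = i;   // flat primitive storage
            long total = 0;
            for (int x : xs) total += x;
            return total;
        }

        public static void main(String[] args) {
            int n = 1000000;
            // Both compute n*(n-1)/2; the array version allocates one object
            // instead of roughly n boxed Integers.
            System.out.println(sumList(n) == sumArray(n));
        }
    }
    ```

    Same answer either way; the difference is purely allocation and memory layout, which is exactly the kind of thing that shows up in energy measurements.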

    • FizzyOrange@programming.dev · 4 days ago

      > They chose an “optimized” set of algorithms from “The Computer Language Benchmarks Game” to produce results for well-optimized code in each language.

      Honestly that’s all you need to know to throw this paper away.

        • FizzyOrange@programming.dev · 3 days ago

          It’s a very heavily gamed benchmark. The most frequent issues I’ve seen are:

          • Different uses of multi-threading - some submissions use it, some don’t.
          • Different algorithms for the same problem.
          • Calling into C libraries to do the actual work. Lots of the Python submissions do this.

          They’ve at least finally started labelling stupid submissions as “contentious”, but that wasn’t the case when this study was done.
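          As a sketch of the multi-threading point (a toy problem of my own choosing, not an actual Benchmarks Game submission): two “submissions” that compute the same answer, where one quietly spreads the work across every core.

          ```java
          import java.util.stream.LongStream;

          // Two equivalent "submissions" for the same toy problem (sum of squares).
          public class GamedBench {
              static long sequential(long n) {
                  return LongStream.rangeClosed(1, n).map(i -> i * i).sum();
              }

              static long parallel(long n) {
                  // Identical result, but the work is split across all available
                  // cores, so a wall-clock comparison between these two measures
                  // the threading strategy rather than the language.
                  return LongStream.rangeClosed(1, n).parallel().map(i -> i * i).sum();
              }

              public static void main(String[] args) {
                  System.out.println(sequential(10000) == parallel(10000));
              }
          }
          ```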

            • FizzyOrange@programming.dev · 2 days ago

              I agree, but if you take away the hard numbers (which you should), all you’re left with is what we all already knew from experience: fast languages are more energy efficient; C, Rust, Go, Java, etc. are fast; Python, Ruby, etc. are super slow.

              It doesn’t add anything at all.

              • atzanteol@sh.itjust.works · 2 days ago (edited)

                Well… No. You’re reading the title. Read the document.

                “We all know” is the gateway to ignorance. You need to test common knowledge to see if it’s really true; just assuming it is isn’t knowledge, it’s guessing.

                Second - it’s not always true:

                > for the fasta benchmark, Fortran is the second most energy efficient language, but falls off 6 positions down if ordered by execution time.

                Thirdly - they also tested memory usage to see whether it was involved in energy usage.