• commander@lemmy.world · 4 days ago

    I’m sure there are data science/datacenter people who can appreciate this. For me, all I’m thinking about is how hot it runs and how much I wish 20TB SSDs would soon be priced like HDDs.

    • Justin@lemmy.jlh.name · 4 days ago

      Nah, datacenters care more about capacity or IOPS; throughput is meaningless, since you’ll always be bottlenecked by the network.

      • randombullet@programming.dev · 3 days ago

        A lot are moving to software-defined networking, which runs at RAM speeds.

        But responsiveness is typically quite important in a virtualized environment.

        InfiniBand could theoretically run at 2400 Gbps, which is 300 GB/s.
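
        As a rough sanity check on those figures (pure unit conversion, ignoring protocol and encoding overhead, and assuming the ~15 GB/s drive speed discussed elsewhere in this thread):

        ```python
        # Back-of-the-envelope conversion of the InfiniBand figure above.
        infiniband_gbit = 2400            # theoretical aggregate bandwidth, Gbit/s
        drive_gbit = 15 * 8               # one ~15 GB/s PCIe 5.0 SSD, in Gbit/s

        print(infiniband_gbit / 8)               # 300.0 -> GB/s, matching the number above
        print(infiniband_gbit / drive_gbit)      # 20.0  -> drives needed to saturate that link
        ```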

      • aleq@lemmy.world · 4 days ago

        Not necessarily, if you run workloads within the datacenter? Surely that’s not that rare, even if datacenters are mostly for hosting web services.

        • Justin@lemmy.jlh.name · 4 days ago

          Yeah, but 15 GB/s is 120 Gbit/s. Your storage nodes are going to need more than 2×800 Gbit if you want to take advantage of the bandwidth once you start putting more than 14 drives in. Also, those 14 drives probably won’t have more than 30M IOPS. Your typical 2U storage node is going to have something like 24 drives, so you’ll probably be bottlenecked by bandwidth or IOPS whether you put in 15 GB/s drives or 7 GB/s drives.
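
          Rough numbers, assuming 15 GB/s per drive, 2×800 Gbit of NICs, and a 24-drive node:

          ```python
          # Quick check of the bandwidth math above (assumed figures, not measurements).
          drive_gbs = 15                      # GB/s per drive
          nic_gbs = 2 * 800 / 8               # two 800 Gbit NICs -> 200 GB/s of network

          print(nic_gbs / drive_gbs)          # ~13.3 -> around 14 drives already max out the NICs

          drives_per_node = 24
          print(drives_per_node * drive_gbs)  # 360 GB/s of raw drive bandwidth vs 200 GB/s of network
          ```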

          Maybe it makes sense these days; I haven’t seen any big storage servers myself, since I’m usually working with cloud or lab environments.

          • Aceticon@lemmy.dbzer0.com · 4 days ago

            If what you’re doing is database queries on large datasets, the network speed is not even close to the bottleneck unless you have a really dumbly partitioned cluster (in which case you need to fire your systems designer and your DBA).

            There are more kinds of loads than just serving static data over a network.

        • Albbi@lemmy.ca · 4 days ago

          I work in bioinformatics. The faster the drive, the better! Some of my recent jobs were running poorly optimized code that would turn 1 TB of data into 10 TB of output. So painful to run with 36 replicates.

    • kkj@lemmy.dbzer0.com · 4 days ago

      Agreed. I’d happily settle for 1GB/s, maybe even less, if I could get the random seek times, power usage, durability, and density of SSDs without paying through the nose.

      • commander@lemmy.world · 3 days ago

        I’d be more than happy with 1 GB/s drives for storage. I’d be happy with SATA3 SSD speeds. I’d be happy if they were still sized like a 2.5" drive. USB4 ports go up to 80 Gb/s. I’d be happy with an external drive bay with each slot doing 1 GB/s.
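
        For scale, a quick sketch of how many slots like that a single 80 Gb/s USB4 port could feed (hypothetical bay, ignoring protocol overhead):

        ```python
        # Hypothetical external bay: 1 GB/s per slot over one USB4 link.
        usb4_gbs = 80 / 8            # 80 Gbit/s -> 10 GB/s of host bandwidth
        slot_gbs = 1                 # desired per-slot throughput

        print(usb4_gbs / slot_gbs)   # 10.0 -> roughly ten 1 GB/s slots per port, before overhead
        ```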