DM-SV01. Performance and efficiency for your infrastructure.
DM-SV01
OCP Server

The DM-SV01 is a dual-processor server featuring AMD EPYC™ processors. With options scaling from 8 to 64 cores per processor, AMD EPYC™ processors are among the most advanced on the market. They are manufactured on 7nm process technology, which makes them more energy efficient.

The system is hot-pluggable and serviceable from the front, avoiding work in the hot aisle. The form factor is based on the Open Compute Project, allowing three nodes in 2OU of height in OCP racks. As an option for 19” racks, the DM-SV01 may be installed in a DM-SV Chassis 1904 shelf, which holds four servers at a height of 4.5U.

Features

Memory

  • Each processor has 8 DIMM memory slots supporting up to 3200 MT/s, allowing a maximum capacity of 4TB per server.
  • Each DIMM memory slot has its own memory channel controller.
  • Processor P0 also supports NVDIMM modules on four DIMM slots.
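  For reference, the 4TB maximum corresponds to fully populated slots: 2 processors × 8 DIMMs × 256GB per module = 4TB (the 256GB module size is an assumption inferred from the stated totals, not a figure from this datasheet).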

Storage

  • One M.2 (up to 4TB) NVMe disk on board
  • One module supporting up to 4 hot-swappable E1.S NVMe SSDs, located on the front right side of the server, with a capacity of up to 4TB per disk. The module can optionally be populated with M.2 NVMe SSDs, without hot-swap support.
  • PCIe x8 card supporting two hot-swappable E1.S NVMe SSDs. Up to 3 cards can be installed in the server using a riser card with three x8 FHHL slots.
  • PCIe x16 card with four M.2 sockets. The M.2 NVMe SSDs (up to 4TB) are not hot-swappable.
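
  All of the options above expose standard NVMe controllers to the host OS, so a quick inventory check can confirm which drives the system detected. A minimal sketch, assuming a Linux host and the standard sysfs layout (generic tooling, not part of the DM-SV01 documentation):

    # Lists NVMe controllers with model and serial so an operator can
    # confirm which E1.S / M.2 SSDs were detected (generic Linux sysfs).
    from pathlib import Path

    def read(p: Path) -> str:
        try:
            return p.read_text().strip()
        except OSError:
            return "unknown"

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        print(f"{ctrl.name}: model={read(ctrl / 'model')} "
              f"serial={read(ctrl / 'serial')}")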

Expansion slots

  • For expansion slots, one of the two options below can be selected:
    • Riser card with one PCIe x16 FHHL (Full Height Half Length) slot and one PCIe x8 FHHL slot.
    • Riser card with three PCIe x8 FHHL slots. The lower slot accepts only Datacom PCIe x8 cards for two E1.S NVMe SSDs.

Network

  • One PCIe x16 slot for an OCP 2.0 Mezzanine NIC. It accepts network interface cards with QSFP ports of up to 100Gbit/s.
  • Additional network cards in PCIe format can also be installed in the expansion slots.
  • SFP+ and QSFP ports can be connected inside the rack with copper cables, without requiring optical modules. This reduces cost and avoids the power consumption of the optical modules.

BMC Management

  • The system is managed by a Baseboard Management Controller (BMC). It can be connected to the data center management network via a Gigabit Ethernet port on the front panel or via NC-SI over the OCP Mezzanine NIC, enabling out-of-band systems management.
  • BMC software: OpenBMC, with auditable and modifiable code. Redfish support (without IPMI).
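
  Because management is exposed through Redfish rather than IPMI, routine out-of-band queries can be scripted over HTTPS. A minimal sketch, assuming a reachable BMC and the standard Redfish /redfish/v1/Systems path; the address and credentials below are placeholders, not DM-SV01 defaults:

    # Query the BMC's Redfish service for basic system state.
    import requests

    BMC = "https://192.0.2.10"       # placeholder BMC address
    AUTH = ("admin", "password")     # placeholder credentials

    # /redfish/v1/Systems is defined by the Redfish standard.
    systems = requests.get(f"{BMC}/redfish/v1/Systems",
                           auth=AUTH, verify=False, timeout=10).json()
    for member in systems.get("Members", []):
        info = requests.get(BMC + member["@odata.id"],
                            auth=AUTH, verify=False, timeout=10).json()
        print(info.get("Model"), info.get("PowerState"))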

Interfaces

  • The system can be connected to an external shelf of GPUs via PCIe retimer cards.
  • The system can be connected to external JBOD storage via PCIe SAS HBA cards.
  • The system can be connected to an external JBOF via PCIe retimer cards.
  • Front panel: two USB 3.1 ports connected to Processor 0, Gigabit Ethernet management port, VGA port, OCP 2.0 Mezzanine NIC, monitoring LEDs, and Power, Reset and UID buttons.

Technical specifications

  • Power supply: 12Vdc supplied by the OCP rack or by the DM-SV Chassis 1904 (Datacom shelf for installation in 19” racks). Consumption of up to 750W, depending on the processors and peripherals installed.
  • Temperature: Highly efficient heat sinks and two 80mm fans allow operation from 0°C to 40°C (at sea level). Operation with E1.S SSDs at full R/W rate is limited to 35°C.
  • Dimensions of the sled: 89 x 174 x 724 mm.