Supports 1x double slot GPU card, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 4 x 3.5" NVMe/SATA hot-swappable bays
Ideal for virtualisation, cloud computing and enterprise server applications. 2x PCI-E 4.0 x16 slots. Intel® Ethernet Controller X550 with 2x 10GbE RJ45. Redundant power supplies.
Supports 1x double slot GPU card, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 10 x 2.5" NVMe/SATA hot-swappable bays
Supports 2x double slot GPU cards, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 8 x 3.5" NVMe/SATA hot-swappable bays
Edge Server – 1U 3rd Gen. Intel Xeon Scalable GPU server system, ideal for AI & Edge applications.
4th Gen Intel Xeon Scalable processor, single 1Gb/s LAN port, redundant power supply, 2 x 2.5" NVMe/SATA hot-swappable bays
Supports up to 3 x double slot Gen5 GPU cards, single 1Gb/s LAN port, redundant power supply, 12 x 3.5/2.5" SATA/SAS hot-swappable bays, 4th Gen Intel Xeon Scalable processor
Up to 4 x NVIDIA ® PCIe Gen4 GPU cards. NVIDIA-Certified system for scalability, functionality, security, and performance. Dedicated management port. Redundant power.
Short Depth Single AMD EPYC 9004 Series Edge Server with 2x GPU slots, 2x 2.5" Gen4 NVMe/SATA hot-swappable bays
Single AMD EPYC 9004 Series, Supports up to 2x FHFL PCIe Gen5 x16 slots - 12x 3.5" NVMe / SATA Drives.
Supports 3x double slot GPU cards, dual 1Gb/s LAN ports, 5x PCIe Gen4 x16 slots, redundant power supply.
Dual 4th Gen Intel Xeon Scalable processors, GPU computing pedestal supercomputer server, 4x Tesla or RTX GPU cards
2U GPU server powered by dual-socket 3rd Gen Intel Xeon Scalable processors, supporting up to 16 DIMMs, four dual-slot GPUs, 2x M.2, four NVMe bays (by SKU), and eleven PCIe 4.0 slots in total
8x PCIe Gen4 expansion slots for GPUs, 2 x 10Gb/s SFP+ LAN ports (Mellanox® ConnectX-4 Lx controller), 2 x M.2 with PCIe Gen3 x4/x2 interface
GPU server optimised for HPC, Scientific Virtualisation and AI. Powered by 3rd Gen Intel Xeon Scalable processors. 6x PCIe Gen 4.0 x16, 1x M.2
Ideal for scientific virtualisation and HPC. 6x PCI-E 4.0 x16 slots. 2x M.2 NVMe or SATA supported. Redundant power supplies.
8x PCIe Gen3 expansion slots for GPUs, 2x 10Gb/s BASE-T LAN ports (Intel® X550-AT2 controller), 4x NVMe and 4x SATA/SAS 2.5" hot-swappable HDD/SSD bays
2U dual-socket GPU server powered by 3rd Gen Intel Xeon Scalable processors, supporting up to 16 DIMMs, four dual-slot GPUs, 4x M.2, eight NVMe bays (by SKU), and eleven PCIe 4.0 slots in total.
GPU Server - 2U 8x GPU Server | Applications: AI, AI Training, AI Inference, Visual Computing & HPC. Dual 10Gb/s BASE-T LAN ports.
Up to 8x PCIe Gen4 GPGPU cards, dual 10Gb/s LAN ports, redundant power option.
Up to 8 x PCIe Gen4 GPGPU cards, 2 x 10Gb/s BASE-T LAN ports (Intel® X550-AT2), 8-Channel RDIMM/LRDIMM DDR4 per processor, 32 x DIMMs
8 PCI-E 4.0 x16 + 3 PCI-E 4.0 x8 slots, Up to 24 Hot-swap 2.5" drive bays, 2 GbE LAN ports (rear)
Supports up to 8 x double slot Gen4 GPU cards, dual 10Gb/s BASE-T LAN ports, redundant power supply, 8 x 2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC
10 x FHFL Gen3 expansion slots for GPU cards, 2 x 10Gb/s BASE-T LAN ports (Intel® X550-AT2), 8 x 2.5" NVMe, 2 x SATA/SAS 2.5" hot-swappable HDD/SSD bays, 12 x 3.5" SATA/SAS hot-swappable HDD/SSD bays
Dual AMD EPYC 9004 Series 8x GPU Server - 24x 2.5" NVMe / SATA / SAS + 4x NVMe Dedicated Drives
Up to 10x PCIe Gen4 GPGPU cards, dual 10Gb/s BASE-T LAN, redundant power supply.
Supports 10x double slot GPU cards, redundant power supply, 12 x 3.5/2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC
Supports 10x double slot GPU cards, dual 10Gb/s BASE-T LAN ports, redundant power supply, 12 x 3.5/2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC
High Density 2U System with NVIDIA® HGX™ A100 4-GPU, Direct connect PCI-E Gen4 Platform with NVIDIA® NVLink™, IPMI 2.0 + KVM with dedicated 10G LAN
8x NVIDIA A100 Gen4 GPUs, 6x NVLink Switch fabric, 2x M.2 on board and 4x hybrid SATA/NVMe bays, 8x PCIe Gen4 x16 slots
Supports 4x SXM5 GPU Modules, dual 10Gb/s BASE-T LAN ports, redundant power supply, 8 x 2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC
Supports 8x HGX H100 GPUs, dual 10Gb/s BASE-T LAN ports, redundant power supply, 16 x 2.5" NVMe, 8x SATA hot-swappable bays. Built for AI Training and Inferencing.
Dual AMD EPYC 9004 Series 8x GPU server supporting 8x HGX H100 GPUs - 16x 2.5" NVMe + 8x SATA hot-swappable bays. Built for AI Training and Inferencing.
NVIDIA DGX A100 with 8x NVIDIA A100 80GB Tensor Core GPUs, Dual AMD Rome 7742 Processors, 2TB Memory, 2x 1.92TB NVMe M.2 & 8x 3.84TB NVMe U.2.
NVIDIA DGX H100 with 8x NVIDIA H100 Tensor Core GPUs, Dual Intel® Xeon® Platinum 8480C Processors, 2TB Memory, 2x 1.92TB NVMe M.2 & 8x 3.84TB NVMe U.2.
 | NVIDIA Titan RTX | NVIDIA T4 | NVIDIA A100 | NVIDIA L4 | NVIDIA H100 |
---|---|---|---|---|---|
Architecture | Turing | Turing | Ampere | Ada Lovelace | Hopper |
SMs | 72 | 40 | 108 | 58 | 114 |
CUDA Cores | 4,608 | 2,560 | 6,912 | 7,424 | 14,592 |
Tensor Cores | 576 | 320 | 432 | 232 | 456 |
Frequency | 1,350 MHz | 1,590 MHz | 1,095 MHz | 795 MHz | 1,590 MHz |
TFLOPS (double) | 0.51 (1:32) | 0.25 (1:32) | 9.7 | 0.49 (1:64) | 25.6 (1:2) |
TFLOPS (single) | 16.3 | 8.1 | 19.5 | 30.3 | 51.2 |
TFLOPS (half/Tensor) | 130 | 65 | 312 (624 sparse) | 121 (242 sparse) | 756 (1,513 sparse) |
Cache | 6 MB | 4 MB | 40 MB | 48 MB | 50 MB |
Max. Memory | 24 GB | 16 GB | 40 GB | 24 GB | 80 GB |
Memory B/W | 672 GB/s | 320 GB/s | 1,555 GB/s | 300 GB/s | 2,000 GB/s |
The fastest and highest-performance PC graphics card ever created, the NVIDIA Titan RTX is powered by the Turing architecture and brings 130 Tensor TFLOPS of performance, 576 Tensor Cores and 24GB of super-fast GDDR6 memory to your PC. The Titan RTX powers machine learning, AI and creative workflows.
It is hard to find a better option for dealing with computationally intense workloads than the Titan RTX. Created to dominate in even the most demanding of situations, it brings ultimate speed to your data centre. The Titan RTX is built on NVIDIA's Turing GPU Architecture. It includes the very latest Tensor Core and RT Core technology and is also supported by NVIDIA drivers and SDKs. This enables you to work faster and leads to improved results.
AI models can be trained significantly faster with 576 NVIDIA Turing mixed-precision Tensor Cores providing 130 TFLOPS of AI performance. This card works well with all the best-known deep learning frameworks, is compatible with NVIDIA GPU Cloud and is supported by NVIDIA's CUDA-X AI SDK.
It also delivers serious application acceleration, with 4,608 NVIDIA Turing CUDA cores speeding up end-to-end data science workflows. With 24 GB of GDDR6 memory you can process gargantuan sets of data.
The Titan RTX reaches a level of performance far beyond its predecessors. Built with multi-precision Turing Tensor Cores, Titan RTX provides breakthrough performance from FP32, FP16, INT8 and INT4, making quicker training and inferencing of neural networks possible.
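To show how these multi-precision Tensor Cores are typically put to work, here is a minimal mixed-precision training sketch using PyTorch's automatic mixed precision (AMP). The model, sizes and loop below are illustrative placeholders, not a prescribed workflow:

```python
# Minimal mixed-precision training sketch (PyTorch AMP); model and data are placeholders.
import torch

model = torch.nn.Linear(1024, 1024).cuda()           # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()                 # loss scaling guards against FP16 underflow

for _ in range(10):                                  # dummy training loop
    x = torch.randn(64, 1024, device="cuda")
    target = torch.randn(64, 1024, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                  # eligible ops run in FP16 on Tensor Cores
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()                    # backward pass on the scaled loss
    scaler.step(optimizer)                           # unscales gradients, then steps
    scaler.update()
```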
NVIDIA Tesla T4 GPUs power the planet's most reliable mainstream servers. They fit easily into standard data centre infrastructures. Designed into a low-profile, 70-watt package, T4 is powered by NVIDIA Turing Tensor Cores, supplying innovative multi-precision performance to accelerate a vast range of modern applications.
It is almost certain that we are heading towards a future where each of your customer interactions, every one of your products and services will be influenced and enhanced by Artificial Intelligence. AI is going to become the driving force behind all future business, and whoever adapts first to this change is going to hold the key to business success in the long term.
The NVIDIA T4 GPU allows you to cost-effectively scale artificial intelligence-based services. It accelerates diverse cloud workloads, including high-performance computing, data analytics, deep learning training and inference, graphics and machine learning. T4 features multi-precision Turing Tensor Cores and new RT Cores. It is based on NVIDIA Turing architecture and comes in a very energy efficient small PCIe form factor. T4 delivers ground-breaking performance at scale.
T4 harnesses revolutionary Turing Tensor Core technology featuring multi-precision computing to deal with diverse workloads. The T4 is capable of reaching blazing fast speeds.
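As a rough sketch of what reduced-precision inference on a T4 can look like in practice, the snippet below casts a placeholder PyTorch model to FP16 and runs a batch through it; the model and shapes are hypothetical:

```python
# Sketch: FP16 inference on a T4-class GPU (placeholder model and batch shapes).
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 10)
).cuda().half().eval()                               # cast weights to FP16

with torch.inference_mode():                         # no autograd bookkeeping for serving
    batch = torch.randn(32, 512, device="cuda", dtype=torch.float16)
    scores = model(batch)                            # eligible matmuls use Turing Tensor Cores
```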
User engagement will be a vital component of successful AI implementation, with responsiveness one of the main keys. This will be especially apparent in services such as visual search, conversational AI and recommender systems. As models continue to advance and grow in complexity, ever-greater compute capability will be required. T4 delivers massively improved throughput, allowing more requests to be served in real time.
The medium of online video is quite possibly the number one way of delivering information in the modern age. As we move forward into the future, the volume of online videos will only continue to grow exponentially. Simultaneously, the demand for answers to how to efficiently search and gain insights from video continues to grow.
T4 provides ground-breaking performance for AI video applications, featuring dedicated hardware transcoding engines which deliver 2x the decoding performance of previous-generation GPUs. T4 can decode nearly 40 full-HD video streams, making it simple to integrate scalable deep learning into video pipelines to provide inventive, smart video services.
The NVIDIA A100 GPU provides unmatched acceleration at every scale for data analytics, AI and high-performance computing to attack the very toughest computing challenges. An A100 can efficiently and effectively scale to thousands of GPUs. With NVIDIA Multi-Instance GPU (MIG) technology, it can be partitioned into 7 GPU instances, accelerating workloads of every size.
The NVIDIA A100 introduces double-precision Tensor Cores, the biggest milestone since double-precision computing was introduced in GPUs. The speed boost can be immense: a 10-hour double-precision simulation running on NVIDIA V100 Tensor Core GPUs is cut down to only 4 hours on A100s. High-performance applications can also leverage TF32 precision in the A100's Tensor Cores to achieve up to 10x higher throughput for single-precision dense matrix-multiply operations.
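For a concrete sense of how TF32 is enabled, the sketch below uses PyTorch's TF32 switches on an A100; the matrix sizes are arbitrary placeholders:

```python
# Sketch: opting in to TF32 for FP32 matmuls on Ampere GPUs such as the A100.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # matmuls may use TF32 Tensor Cores
torch.backends.cudnn.allow_tf32 = True         # cuDNN convolutions may use TF32

a = torch.randn(8192, 8192, device="cuda")     # plain FP32 tensors; user code is unchanged
b = torch.randn(8192, 8192, device="cuda")
c = a @ b                                      # executed via TF32 Tensor Cores when enabled
```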
In modern data centres it is vital to be able to visualise, analyse and transform huge datasets into insights. However, scale-out solutions quite often end up being bogged down as datasets end up spread across many servers. Servers powered by the A100 deliver the necessary compute power, as well as 1.6TB/sec of memory bandwidth and huge scalability.
The NVIDIA A100 with MIG maximises GPU-accelerated infrastructure utilisation in a way never seen before. With MIG, an A100 GPU can be partitioned into up to 7 independent instances. This can give a multitude of users access to GPU acceleration for their applications and projects.
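As a sketch of how MIG partitioning is typically driven, the snippet below wraps the standard nvidia-smi MIG commands; the GPU index and the 1g.5gb profile are examples only, and the available profiles depend on the GPU and driver:

```python
# Sketch: partitioning an A100 into MIG instances via nvidia-smi (run as root).
import subprocess

def run(cmd):
    print("$", cmd)
    subprocess.run(cmd.split(), check=True)

run("nvidia-smi -i 0 -mig 1")                    # enable MIG mode on GPU 0 (may need a reset)
run("nvidia-smi mig -cgi 1g.5gb,1g.5gb -C")      # create two 1g.5gb GPU instances + compute instances
run("nvidia-smi mig -lgi")                       # list the resulting GPU instances
```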
The NVIDIA L4 Tensor Core GPU, built on the NVIDIA Ada Lovelace architecture, offers versatile and power-efficient acceleration across a wide range of applications, including video processing, AI, visual computing, graphics, virtualisation, and more. Available in a compact low-profile design, the L4 provides a cost-effective and energy-efficient solution, ensuring high throughput and minimal latency in servers spanning from edge devices to data centers and the cloud.
The NVIDIA L4 is an integral part of the NVIDIA data center platform. Engineered to support a wide range of applications such as AI, video processing, virtual workstations, graphics rendering, simulations, data science, and data analytics, this platform enhances the performance of more than 3,000 applications. It is accessible across various environments, spanning from data centers to edge computing to the cloud, offering substantial performance improvements and energy-efficient capabilities.
As AI and video technologies become more widespread, there's a growing need for efficient and affordable computing. NVIDIA L4 Tensor Core GPUs offer a substantial boost in AI video performance, up to 120 times better, resulting in a remarkable 99 percent improvement in energy efficiency and lower overall ownership costs when compared to traditional CPU-based systems. This enables businesses to reduce their server space requirements and significantly decrease their environmental impact, all while expanding their data centers to serve more users. Switching from CPUs to NVIDIA L4 GPUs in a 2-megawatt data center can save enough energy to power over 2,000 homes for a year or offset the carbon emissions equivalent to planting 172,000 trees over a decade.
As AI becomes commonplace in enterprises, organizations need comprehensive AI-ready infrastructure to prepare for the future. NVIDIA AI Enterprise is a complete cloud-native package of AI and data analytics software, designed to empower all organizations in excelling at AI. It's certified for deployment across various environments, including enterprise data centers and the cloud, and includes global enterprise support to ensure successful AI projects.
NVIDIA AI Enterprise is optimised to streamline AI development and deployment. It comes with tested open-source containers and frameworks, certified to work on standard data center hardware and popular NVIDIA-Certified Systems equipped with NVIDIA L4 Tensor Core GPUs. Plus, it includes support, providing organizations with the benefits of open source transparency and the reliability of global NVIDIA Enterprise Support, offering expertise for both AI practitioners and IT administrators.
NVIDIA AI Enterprise software is an extra license for NVIDIA L4 Tensor Core GPUs, making high-performance AI available to almost any organization for training, inference, and data science tasks. When combined with NVIDIA L4, it simplifies creating an AI-ready platform, speeds up AI development and deployment, and provides the performance, security, and scalability needed to gain insights quickly and realize business benefits sooner.
Experience remarkable performance, scalability, and security for all tasks using the NVIDIA H100 Tensor Core GPU. The NVIDIA NVLink Switch System allows for connecting up to 256 H100 GPUs to boost exascale workloads. This GPU features a dedicated Transformer Engine to handle trillion-parameter language models. Thanks to these technological advancements, the H100 can accelerate large language models (LLMs) by an impressive 30X compared to the previous generation, establishing it as the leader in conversational AI.
NVIDIA H100 GPUs for regular servers include a five-year software subscription that encompasses enterprise support for the NVIDIA AI Enterprise software suite. This simplifies the process of adopting AI while ensuring top performance. It grants organizations access to essential AI frameworks and tools to create H100-accelerated AI applications, such as chatbots, recommendation engines, and vision AI. Take advantage of the NVIDIA AI Enterprise software subscription and its associated support for the NVIDIA H100.
The NVIDIA H100 GPUs feature fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, solidifying NVIDIA's AI leadership by achieving up to 4X faster training and an impressive 30X speed boost for inference with large language models. In the realm of high-performance computing (HPC), the H100 triples the floating-point operations per second (FLOPS) for FP64 and introduces dynamic programming (DPX) instructions, resulting in a remarkable 7X performance increase. Equipped with the second-generation Multi-Instance GPU (MIG), built-in NVIDIA confidential computing, and the NVIDIA NVLink Switch System, the H100 provides secure acceleration for all workloads across data centers, ranging from enterprise to exascale.
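As an illustration of how FP8 with the Transformer Engine is commonly exercised, the sketch below uses NVIDIA's transformer_engine Python package; the layer sizes and scaling recipe are placeholder choices rather than a recommended configuration:

```python
# Sketch: FP8 execution on an H100 via NVIDIA Transformer Engine (placeholder sizes).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)
layer = te.Linear(1024, 1024).cuda()             # TE drop-in replacement for nn.Linear

x = torch.randn(64, 1024, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):   # matmuls run in FP8
    out = layer(x)
out.float().sum().backward()                     # gradients flow as usual
```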
The NVIDIA Pascal architecture enables the Tesla P100 to deliver superior performance for HPC and hyperscale workloads. With more than 21 teraflops of FP16 performance, Pascal is optimised to drive exciting new possibilities in deep learning applications. Pascal also delivers over 5 teraflops of double-precision and over 10 teraflops of single-precision performance for HPC workloads.
The NVIDIA H100 is a crucial component of the NVIDIA data center platform, designed to enhance AI, HPC, and data analytics. This platform accelerates more than 3,000 applications and is accessible across various locations, from data centers to edge computing, providing substantial performance improvements and cost-saving possibilities.
Broadberry GPU Servers harness the processing power of NVIDIA Tesla graphics processing units for millions of applications such as image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.
As computing evolves and processing moves from the CPU to co-processing between the CPU and GPUs, NVIDIA created the CUDA parallel computing architecture to harness these performance benefits.
Speak to Broadberry GPU computing experts to find out more.
Accelerating scientific discovery, visualising big data for insights, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, or the training of sophisticated deep learning networks. These workloads also require accelerated data centres to meet the exponential growth in computing demand.
NVIDIA Tesla is the world's leading platform for accelerated data centres, deployed by some of the world's largest supercomputing centres and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools and applications to enable faster scientific discoveries and big data insights.
At the heart of the NVIDIA Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute-intensive workloads - without increasing the power budget and physical footprint of data centres.
Traditionally, servers are configured to use a CPU for processing - a component built to handle a wide range of computing requirements, and one that works perfectly for traditional applications such as email servers and storage servers. There is, however, a growing number of applications which benefit enormously from using a graphics card for processing.
A GPU server is a server configured with graphics cards and built to harness the raw processing power of GPUs. Through an offloading process, the CPU can send certain tasks to the GPUs, thereby greatly increasing server performance.
GPUs are designed to deal with anything thrown at them, thriving in the most computationally intense applications.
GPU dedicated servers are often used for fast 3D processing, error-free number crunching and accurate floating-point arithmetic, where the design of graphics processing units allows them to run computations considerably faster than a CPU could. While they often operate at slower clock speeds than CPUs, GPUs can possess thousands of cores, allowing them to run thousands of individual threads at the same time - a technique known as parallel computing.
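A minimal sketch of this thread-per-element model, written with Numba's CUDA support; the array size and launch configuration are illustrative:

```python
# Sketch: one GPU thread per array element - the essence of parallel computing.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)                             # this thread's global index
    if i < out.size:                             # guard against overshoot in the last block
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block    # enough blocks to cover every element
vector_add[blocks, threads_per_block](a, b, out)             # ~1M threads run in parallel
```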
In computationally intensive environments, offloading tasks to a GPU is an excellent way to minimise pressure on the CPU, mitigating potential performance bottlenecks.
A significant number of the Big Data tasks which create business value involve constantly repeating the same operations. The huge number of cores available in GPU servers is well suited to this type of work: the workload is split across processors to get through voluminous data sets at a faster rate.
GPU servers tend to use less energy in comparison to CPU-only based servers, providing long term reduction in TCO.
Broadberry GPU optimised servers feature up to 4TB of RAM and can be powered by the latest Intel Xeon Scalable processors or AMD EPYC series processors. With a massive range of GPU options available, Broadberry GPU-dense servers can be configured with up to 10x NVIDIA Tesla GPU cards, the world's leading platform for accelerating data centres. Deployed by many of the planet's largest supercomputing centres and enterprises, it utilises GPU accelerators, interconnect technologies, accelerated computing systems, development tools and applications to allow for faster scientific discoveries and big data insights.
At the centre of the NVIDIA Tesla platform are the massively parallel GPU accelerators that deliver significantly higher throughput for compute-intensive workloads, without a subsequent rise in the physical footprint of data centres or an increase in power consumption.
Broadberry GPU servers are built around industry-leading GPU-optimised server chassis which have been designed and rigorously tested to run up to 10x GPUs for massively parallel computing whilst keeping cool due to the latest advances in server cooling technology.
Our online configurator allows you to configure your GPU optimised server with a wide range of powerful processors, RAM options as well as SSD, NVMe or HDD storage options.
GPUs excel at performing massively parallel operations very quickly - often up to 10x quicker than their CPU counterparts. Because a GPU is designed to perform parallel operations on multiple sets of data, it can quickly render high-resolution images and 3D video concurrently, analyse big sets of data faster, or train your AI application. NVIDIA Tesla based GPU servers are also often used for non-graphical tasks, including scientific computation and machine learning.
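One simple way to see the difference for yourself is to time the same dense matrix multiply on the CPU and the GPU; the PyTorch sketch below does exactly that, with arbitrary sizes (the actual speedup depends heavily on the hardware and workload):

```python
# Sketch: timing one matrix multiply on CPU vs GPU with PyTorch.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
a @ b                                            # CPU matmul
cpu_s = time.perf_counter() - t0

a_gpu, b_gpu = a.cuda(), b.cuda()
a_gpu @ b_gpu                                    # warm-up: CUDA init and kernel caching
torch.cuda.synchronize()

t0 = time.perf_counter()
a_gpu @ b_gpu
torch.cuda.synchronize()                         # wait for the asynchronous GPU kernel
gpu_s = time.perf_counter() - t0

print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s")
```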
The number of GPUs that a GPU optimised server can be configured with used to be limited by three main factors: the number of PCIe lanes on the CPU, physical space in the chassis, and the power that the system's power supply could provide. Working closely with our partners, Broadberry's GPU server range utilises the latest technical advances in the industry to allow up to 10x double-width GPU cards in a system, or 20x single-width cards.
Before leaving our workshops, every Broadberry server and storage solution undergoes a rigorous 48-hour testing procedure. This, combined with our choice of high-quality components, ensures that all of our server and storage solutions meet the strictest quality standards demanded of us.
Our primary objective is to offer server and storage solutions of the highest quality. We understand that every business has different requirements, and we are able to offer unrivalled flexibility in the customisation and design of server and storage solutions.
We have established ourselves as a leading storage provider in Europe and have been supplying our server and storage solutions to the world's biggest brands since 1989. Some example customers: