News 2011
University of Florida Physics Department installs 180TB RAID Storage System  
September 9, 2011  
The University of Florida Physics Department installed a 180TB NSR862 8U RAID Storage System. The system is a CI Design 62-slot hot-swap SAS/SATA chassis with dual Intel® Xeon® E5620 processors, 24GB of DDR3-1333 memory, three LSI MegaRAID 9280-24i4e RAID controllers, 3TB Seagate SAS disk drives and a Mellanox ConnectX-2® InfiniBand adapter card.
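The quoted capacity lines up with simple arithmetic. A back-of-the-envelope sketch (the release does not state the exact drive count or RAID level, so the populated-slot figure below is inferred, and raw capacity before RAID overhead is assumed):

```python
drive_tb = 3          # 3TB Seagate SAS drives
quoted_tb = 180       # advertised capacity
slots = 62            # CI Design hot-swap bays

drives = quoted_tb // drive_tb   # drive count implied by the quoted capacity
spare_slots = slots - drives     # bays left over, e.g. for hot spares
print(drives, spare_slots)       # 60 2
```

Sixty populated bays at 3TB each give the quoted 180TB, leaving two bays free in the 62-slot chassis.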
Drexel University Chose Advanced HPC, Bright Cluster Manager to Help Unravel the Mysteries of the Universe and the World of Molecular Dynamics  
San Jose, CA - July 14, 2011  

Bright Computing, a leader in cluster management software, today announced that Drexel University is using Bright Cluster Manager® to manage its newly-installed HPC cluster from Advanced HPC. This system, funded by an NSF grant, is being used to perform research at two extremes of science: simulations of star-forming regions in space, including galactic nuclei, and molecular dynamics. As far apart as these studies appear, they are both based on similar techniques employing particle methods. In addition to research, the cluster is also used for teaching courses in high-performance computing and GPU programming.

Astrophysicist Dr. Steve McMillan is the Principal Investigator using the cluster. There are three co-investigators from multiple departments at Drexel: Dr. Cameron Abrams, Chemical and Biological Engineering; Dr. Jeremy Johnson, Computer Science; and Dr. Nagarajan Kandasamy, Electrical and Computer Engineering.

The team’s cluster is currently the most powerful compute resource at Drexel University, capable of delivering 176,514 GFLOPS of GPU performance. The system comprises 68,352 NVIDIA GPU cores and 48 TB RAID disk storage.

“I am impressed with the capabilities of the Advanced HPC team,” said Dr. McMillan. “They took the time to truly understand our requirements, and designed a system that was even better than our original specifications. We have an incredibly efficient cluster that exceeds our expectations, and has given us more bang for the buck.”

Drexel’s DRACO: By the numbers

  No. of GPUs:        144
  No. of nodes:       24
  CPUs per node:      Two 6-core CPUs
  GPUs per node:      Six
  Peak performance:   176 TFLOPS
  Power consumption:  45 kW
  Cost to deploy:     $400,000
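The figures above are mutually consistent. A quick illustrative check, using only numbers from this release:

```python
nodes = 24
gpus_per_node = 6
gpus = nodes * gpus_per_node        # 144 GPUs, matching the table
gflops_total = 176_514              # GPU performance quoted in the article

per_gpu_gflops = gflops_total / gpus
print(gpus, round(per_gpu_gflops))  # 144 1226
```

That works out to roughly 1.2 TFLOPS per GPU, consistent with the 176 TFLOPS peak quoted for the full system.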

Bright Cluster Manager is used to provision, monitor and manage the system remotely. Dr. McMillan and his team were among the first to use Bright Cluster Manager revision 5.2, with its full support for NVIDIA's CUDA 4.0 and associated metrics.

“Bright Cluster Manager was a good choice for us,” stated Dr. McMillan. “I’m a scientist, not a system manager. The Bright GUI is intuitive and extremely easy to use. I can do everything I need to do on the system remotely — I almost never visit our cluster. I get alerts when attention is required, and am able to hand off much of the overall system management with confidence to an enthusiastic undergraduate.”

“From a support perspective, Advanced HPC and Bright are a great team,” added Dr. McMillan. “Initially we experienced a problem with some of our jobs, and they responded immediately. As it turned out, the issue was a driver problem. The Bright people worked directly with NVIDIA to solve it — sparing us the pain and getting us productive again in a timely fashion.”

About Bright Computing

Bright Computing is a specialist in cluster management software and services for high-performance computing (HPC). Its flagship product — Bright Cluster Manager — with its intuitive graphical user interface and powerful cluster management shell, makes clusters of any size easy to install, use and manage, including systems combining Intel/AMD processors with GPGPU technology. Bright's minimal footprint enables HPC systems to be utilized to their maximum potential, from departmental clusters to large-scale systems. Bright Cluster Manager is the management solution of choice for many research institutes, universities, and companies across the world, including several Top500 installations. Bright Cluster Manager is an official Intel Cluster Ready component and fully complies with the Intel® Cluster Ready specification.

Advanced HPC is Title Sponsor of UCSD Campus LISA Show 2011  
July 13, 2011  

For the second year in a row, Advanced HPC was Title Sponsor of UCSD's Campus LISA show. The show is an annual conference that provides training, updates and information to help sysadmins do their jobs more efficiently and effectively.

The one-day conference features presentations by both UC San Diego technical staff and invited guests, offering relevant talks on emerging technologies, campus networks and IT services, and useful how-to sessions.
Advanced HPC partnered with Tim Dales and Paul Voss from Solarflare to showcase the company's line of 10 Gigabit Ethernet host bus adapters.

Pedram Hariri and Kashif Shaikh from Force10 presented their 48-port 10 Gigabit Ethernet switch. They also raffled off an iPad 2.

Thanks to OCZ Technology, which donated an AHPC surfboard for the raffle while showcasing its line of solid-state drives.

Solarflare Signs Partnership Agreement with Advanced HPC  
Irvine, CA - February 14, 2011  

Leading 10 Gigabit Ethernet provider enhances channel to help higher education, government, defense and private firms build low-latency network clusters

Solarflare Communications, the company pioneering 10 Gigabit Ethernet (10GbE), today announced a reseller agreement with Advanced HPC, Inc., a leading provider of high-performance computing platforms and Linux clusters, as well as data storage and backup solutions. Targeting corporate and public-sector customers, Advanced HPC will enhance Solarflare's ability to reach an important segment of end customers and IT buyers in the education, government and defense sectors. Together, the companies will enable organizations to build low-latency network clusters to address the latest performance and bandwidth demands.

"In order to help capture 10GbE server adapter market share, locate valuable customers worldwide and grow revenue, we've sourced strong reseller partners that focus on high performance computing and data storage practices," said Mike Smith, vice president and general manager of host solutions at Solarflare. "We are happy to continue to expand our long-term strategic partnerships and reach customers through our new relationship with Advanced HPC."

Headquartered in San Diego, Calif., Advanced HPC is a leading provider of innovative IT solutions with a special focus on high-performance computing platforms. Its product line includes Linux cluster solutions, high-performance servers and workstations, disk storage products such as Storage Area Networks (SAN), Network Attached Storage (NAS), and Direct Attached Storage (DAS), tape backup solutions from external tape drives to enterprise-class libraries, disk-to-disk-to-tape solutions and data deduplication solutions, backup software and virtualization solutions. With over 50 years of experience in the technology industry, Advanced HPC ensures that customers get the best solution to fit their needs.

"We view Solarflare as a key strategic partner, as the company's compelling product portfolio opens the door to a multitude of opportunities," said Joe Lipman, vice president of sales at Advanced HPC. "Our customers are looking for high-speed networking solutions that provide low-latency results and Solarflare provides the performance we require to meet these new demands."

First launched in September 2010 with one master VAR partner, Solarflare's channel program has grown 90 percent in six months and added 19 partners spanning various market segments worldwide. Built as a tiered program (Platinum, Gold and Silver), Solarflare provides its VAR partners with a combination of features and benefits based on their primary business needs, helping them generate and close new business at an accelerated pace. The company also offers partners direct support from its technical support and sales channel teams.

Partners can quote and purchase Solarflare's SFN5000 line of 10GbE server adapters through the company's two-tiered distribution channel. Solarflare's products deliver the industry's highest-performance, lowest-latency (4 microseconds), and lowest-power (2.5W per port) solutions on both 10GBASE-T for installed twisted pair copper cabling and SFP+ for installed optical fiber. It also delivers five times the application performance of other server adapters by accelerating virtual I/O for Citrix XenServer, Microsoft Hyper-V and VMware.

About Solarflare Communications, Inc.

Solarflare Communications is the leading provider of 10 Gigabit Ethernet (10GbE) silicon and server adapters. Solarflare's robust and power-efficient solutions are cost effective and easy to deploy. Ready for primetime, Solarflare 10GbE products make possible next-generation applications such as low-latency networking (with OpenOnload) for high frequency trading applications, cloud computing, server virtualization, and network convergence. Solarflare 10GbE adapters have proven their performance in testing conducted by Securities Technology Analysis Center (STAC) Research with Cisco switches and IBM servers. Solarflare silicon can be found in switches, adapters and test equipment shipping from Dell, SMC Networks and others. The company has announced partnerships with Arista Networks, Citrix, Cloudsoft, CommScope, Delta Networks, Panduit, SR Labs and VMware. Solarflare is headquartered in Irvine, Calif., has an R&D site in Cambridge, England, and sales offices in Taiwan and China. For more information, visit

About Advanced HPC, Inc.

Advanced HPC is a provider of HPC platforms and Linux clusters. Its product scope spans Linux cluster solutions, high-performance servers and workstations, disk storage products such as Storage Area Networks (SAN), Network Attached Storage (NAS), and Direct Attached Storage (DAS), tape backup solutions from external tape drives to enterprise-class libraries, disk-to-disk-to-tape solutions and data deduplication solutions, backup software and virtualization solutions. Advanced HPC's team has over 50 years of experience in the technology industry, and its staff includes industry veterans in sales, customer service and production to ensure that customers get the best solution to fit their needs. For more information, visit

Advanced HPC's 48-core node integral to the University of Pennsylvania's Research in Genomics and Bioinformatics
January 28, 2011  

The Penn Genome Frontiers Institute (PGFI) fosters collaborations and scientific exchange across biology, veterinary medicine, pharmacology, medicine, genetics, microbiology, engineering, physics, chemistry and psychology. Penn scientists depend on high-performance computing (HPC) to further their discoveries in genomics and bioinformatics, and achieving that level of power requires pushing the technology envelope. The challenge is not just capacity and performance, as is often the case: the massive I/O demands created by processing large numbers of small files must also be met. PGFI's HPC group has recently installed an Advanced HPC 48-core node with 256GB of shared memory, expandable to 512GB for large-memory requirements. This new system will help propel PGFI's research and deliver a large-memory resource for specific scientific applications. Working with Advanced HPC and AMD proved successful in deploying AMD's latest and greatest technology.

We look forward to working with both Advanced HPC and AMD on future projects.


- Systems Programmer Sr., Penn Genome Frontiers Institute

For more about AMD processors, click here.

For more about Advanced HPC's Mercury Rackmount Server Line, click here.

Advanced HPC and AMD Empower the National Severe Storms Laboratory  
January 15, 2011 – Norman, Oklahoma  

NOAA has installed a quad-processor, 48-core Mercury compute node in its National Severe Storms Laboratory, thanks to a NOAA-NESDIS grant and a donation by Advanced HPC and AMD.

The Mercury server will be used in the development of numerical models of thunderstorms. This computing resource will help NSSL researchers work toward the goal of issuing warnings based on forecasts. Currently, warnings are issued only after threatening weather conditions are detected. The ability to forecast and issue warnings before threatening weather is detected will help promote public safety.

The system will first be used to develop techniques to ingest lightning data into numerical forecast models. The system is also being used for development of Ensemble Kalman Filter (EnKF) techniques for assimilation of Doppler radar data into storm-scale models. Future projects will use the system to test different physical processes, including cloud microphysics and storm electricity in NSSL's 3-D cloud model, the Collaborative Model for Multiscale Atmospheric Simulation (COMMAS).
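The Ensemble Kalman Filter mentioned above updates an ensemble of model states with observations, using covariances estimated from the ensemble itself. A minimal sketch of one stochastic-EnKF analysis step (illustrative only: the `enkf_update` helper, the toy numbers, and the linear observation operator are assumptions, not NSSL's actual implementation):

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_var, rng):
    """One stochastic EnKF analysis step.

    ensemble: (n_members, n_state) forecast states
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    """
    n_members = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)       # state anomalies
    HX = ensemble @ H.T                        # ensemble mapped to observation space
    HA = HX - HX.mean(axis=0)                  # observation-space anomalies

    # Sample covariances estimated from the ensemble
    S = HA.T @ HA / (n_members - 1) + obs_err_var * np.eye(obs.size)
    P_xh = A.T @ HA / (n_members - 1)
    K = P_xh @ np.linalg.inv(S)                # Kalman gain

    # Perturb observations so analysis spread stays statistically consistent
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var),
                                size=(n_members, obs.size))
    return ensemble + (obs_pert - HX) @ K.T

# Toy example: two state variables, one observation of the first
rng = np.random.default_rng(0)
prior = rng.normal(10.0, 2.0, size=(50, 2))
H = np.array([[1.0, 0.0]])
analysis = enkf_update(prior, np.array([12.0]), H, obs_err_var=1.0, rng=rng)
```

Real radar data assimilation adds covariance localization, inflation and nonlinear observation operators; the sketch shows only the core update that pulls the ensemble mean toward the observation.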


University of California Irvine uses HPC resources for Neutrino Telescope Project  
January 5, 2011 – Irvine, California  

We work with the largest neutrino telescopes in the world, AMANDA and IceCube. The AMANDA project, recently completed, produced data for about 10 years at a rate of 30TB/year.

IceCube is almost complete and currently produces 300TB/year. Running the experiment for the next 10-15 years will require data storage in the tens of petabytes. The data are not limited to experimental production: the experiment also generates an equivalent amount of simulated Monte Carlo data. To face these data and computing challenges, we need state-of-the-art HPC resources.
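As a rough sense of scale, the figures quoted here already put the archive near ten petabytes (a back-of-the-envelope estimate assuming a steady 300TB/year rate and the equal volume of Monte Carlo data mentioned above):

```python
rate_tb_per_year = 300   # IceCube experimental data rate
mc_factor = 2            # simulated Monte Carlo data roughly equals experimental data

for years in (10, 15):
    total_pb = rate_tb_per_year * years * mc_factor / 1000  # TB -> PB
    print(years, total_pb)   # 10 6.0, then 15 9.0
```

Detector and analysis needs beyond this raw production push the total higher still, toward the tens of petabytes quoted.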

Recently, we researched a wide variety of new-generation computing nodes and concluded that Advanced HPC produces the most reliable computing resources to accommodate our scientific research goals.

The plot represents current experimental limits and constraints on plausible point-source fluxes of high-energy neutrinos of extragalactic origin. The main goal of neutrino telescopes is to discover and observe neutrino sources of galactic and extragalactic origin.

- System Admin, UCI



Last updated 4/11/2016