Monday, December 21, 2009

My links on http://twitter.com

inomarka_twitte

Lists: @inomarka_twitte/My-E-NET-SIMBIOS-WEB-новости

1. http://bit.ly/jrTiS -... http://ff.im/bw0FY 5:40 PM Nov 15 from FriendFeed
2. http://text-to-speech.imtranslator.net 8:17 PM Nov 8 from the web
3. Methamphetamine - View Capture -- http://shar.es/1zQmV 11:42 AM Oct 19 from ShareThis.com
4. %7BPermalink%7D via @AddThis 7:46 PM Oct 13 from the web
5. Methamphetamine - View Capture -- http://shar.es/1l3Pg 11:50 PM Oct 11 from ShareThis.com
6. Twitter brand new style - Open Capture -- http://shar.es/1YHn5 7:50 PM Oct 6 from ShareThis.com

- Kirill Kirilin (inomarka_twitte) on Twitter

Wednesday, December 2, 2009

Wiki comment by user Kirill

Warning: only install userscripts that you trust.
Userscripts and Firefox

Greasemonkey is an extension for Mozilla Firefox, the open-source web browser. Most userscripts are written for Firefox and Greasemonkey (although some work in Opera, Safari, and even Internet Explorer).

In this guide I assume you are using Firefox; if you are not, install Firefox first.
Userscripts run via Greasemonkey

Now that you have Firefox, you need to install Greasemonkey. After installation (which requires restarting the browser), you are ready to install userscripts.

Clicking a link to a user.js file now triggers Greasemonkey's script-installation panel. Greasemonkey shows you a list of the sites the script will run on and asks whether you want to install the script.

From then on, loading a matching web page results in the additional code (the userscript) being run.
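
To make that flow concrete, here is a minimal sketch of what a userscript file (something.user.js) contains. Everything in it (the name, namespace, and target site) is made up for illustration; the metadata block is what Greasemonkey reads to decide which pages the script runs on, and the body is written in plain JavaScript syntax (which is also valid TypeScript, the language used for the other sketches on this page).

    // ==UserScript==
    // @name        Example Banner (hypothetical)
    // @namespace   http://example.org/scripts
    // @include     http://example.com/*
    // ==/UserScript==

    // Greasemonkey runs everything below on each page that matches the @include
    // pattern above. This example just inserts a visible banner at the top of the page.
    const banner = document.createElement('div');
    banner.textContent = 'This page was modified by a userscript.';
    banner.style.background = 'yellow';
    document.body.insertBefore(banner, document.body.firstChild);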
Other browsers

User scripts can be used in browsers other than Firefox, but the scripting APIs and the browsers' JavaScript support differ.

* Opera userscripts: Opera has its own user-script API with different functionality than Greasemonkey's. However, it recognizes Greasemonkey scripts, and many scripts on this site do work in Opera. See here for a guide to installing scripts in Opera and here for more information about which Greasemonkey scripts may work.

Be Careful

Any user can upload scripts to this site. Community features such as forums and reviews are provided to protect the site and the public. Use common sense when installing source code. If you do not know how to read it, check who has reviewed, favorited, and discussed it. Decide whether you find those people trustworthy by reading their other reviews. The administrators cannot guarantee that "evil" scripts will not be listed on the site.
Disclaimer of warranty

THERE IS NO WARRANTY FOR THE SCRIPTS PRESENTED ON THIS SITE, TO THE EXTENT PERMITTED BY LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING, THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

Limitation of liability

IN NO EVENT, UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA, DATA BEING RENDERED INACCURATE, LOSSES SUSTAINED BY YOU OR THIRD PARTIES, OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

- Installing Greasemonkey Scripts - Userscripts.org

Thursday, November 19, 2009

Google Explorer

It is now possible to add improved HTML 5 support to Internet Explorer (version 6 and later) and to speed up its JavaScript processing. Google Chrome Frame also provides full compatibility with Google services.
http://code.google.com/chrome/chromeframe

Information provided by CHIP magazine, No. 11, November 2009
www.ichip.ru
Please send any additional information to my e-mail address:
inomarka@ovi.com

Wednesday, November 18, 2009

Monday, November 16, 2009


Monday Nov 02, 2009

A Sun Ultra 27 workstation configured with an nVidia FX5800 graphics card delivered outstanding performance running the SPECviewperf® 10 benchmark.
  • When compared with other workstations running a single graphics card (i.e. not running two or more cards in SLI mode), the Sun Ultra 27 workstation places first in 6 of 8 subtests and second in the remaining two subtests.
  • The calculated geometric mean shows that the Sun Ultra 27 workstation is 11% faster than competitors' workstations (a sketch of this calculation follows the results link below).
  • The optimum point for price/performance is the nVidia FX1800 graphics card.
Results have been published on the SPEC web site at http://www.spec.org/gwpg/gpc.data/vp10/summary.html.
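
As a rough illustration of how such a geometric-mean comparison is computed, the sketch below uses the Sun Ultra 27 FX5800 row and, as one example competitor, the HP xw4600 FireGL V7700 row from the tables that follow; the published 11% figure covers all competing single-card systems, so the exact ratio here differs slightly.

    // Per-test frames-per-second figures taken from the performance landscape tables below.
    const sunUltra27 = [59.34, 68.81, 58.07, 246.09, 68.96, 152.01, 42.02, 36.04];
    const hpXw4600V7700 = [49.71, 48.05, 57.11, 268.62, 47.25, 109.71, 40.18, 56.65];

    const geomean = (xs: number[]): number =>
      Math.exp(xs.reduce((sum, x) => sum + Math.log(x), 0) / xs.length);

    // Ratio of geometric means; a value above 1 means the Sun Ultra 27 is faster overall.
    console.log((geomean(sunUltra27) / geomean(hpXw4600V7700)).toFixed(2)); // ~1.10 against this competitor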

Performance Landscape

Performance of the Sun Ultra 27 versus the competition. Bigger is better for each of the eight tests. The comparison is based upon the performance of the Sun Ultra 27 workstation. Performance is measured in frames per second.

System                        3DSMAX        CATIA         ENSIGHT       MAYA
                              Perf     %    Perf     %    Perf     %    Perf      %
Sun Ultra 27 FX5800           59.34         68.81         58.07         246.09
HP xw4600 ATI FireGL V7700    49.71   19    48.05   43    57.11    2    268.62   -8
HP xw4600 FX4800              52.26   14    63.26   12    53.79    8    226.82    7
Fujitsu Celsius M470 FX3800   53.67   11    65.25    7    52.19   10    227.37    7

System                        PROENGINEER   SOLIDWORKS    TEAMCENTER    UGS
                              Perf     %    Perf     %    Perf     %    Perf      %
Sun Ultra 27 FX5800           68.96         152.01        42.02         36.04
HP xw4600 ATI FireGL V7700    47.25   32    109.71   28   40.18    4    56.65   -57
HP xw4600 FX4800              61.15   11    131.31   14   28.42   32    33.43     7
Fujitsu Celsius M470 FX3800   64.39    7    139.2     8   29.02   31    33.27     8
Comparison of various frame buffers on the Sun Ultra 27 running SPECviewperf 10. Performance is reported for each test along with the difference in performance as compared to the FX5800 frame buffer. The runs in the table below were made with 3.2GHz W3570 processors.


Card     3DSMAX        CATIA         ENSIGHT       MAYA          PROENGR       SOLIDWRKS     TEAMCNTR      UGS
         Perf     %    Perf     %    Perf     %    Perf     %    Perf     %    Perf     %    Perf     %    Perf     %
FX5800   57.07         67.84         58.63         219.4         68.05         152.3         40.85         34.73
FX3800   57.17    0    66.57    2    54.91    7    206.4    6    66.48    2    146.3    4    38.48    6    33.12    5
FX1800   56.73    1    64.33    6    52.05   13    189.3   16    64.67    5    135.2   13    34.18   20    30.46   14
FX380    45.90   24    55.81   22    34.93   68    120.3   82    46.09   48    64.11  138    17.00  140    13.88  150

Results and Configuration Summary

Hardware Configuration:

    Sun Ultra 27 Workstation
    1 x 3.33 GHz Intel Xeon (tm) W3580
    2GB (1 x 2GB PC10600 1333MHz)
    1 x 500GB SATA
    nVidia Quadro FX380, FX1800, FX3800 & FX5800
    $7,529.00 (includes Microsoft Windows and monitor)
Software Configuration:

    OS: Microsoft Windows Vista Ultimate, 32-bit
    Benchmark: SPECviewperf 10

Benchmark Description

SPECviewperf measures 3D graphics rendering performance of systems running under OpenGL. SPECviewperf is a synthetic benchmark designed to be a predictor of application performance and a measure of graphics subsystem performance. It is a measure of graphics subsystem performance (primarily graphics bus, driver and graphics hardware) and its impact on the system without the full overhead of an application. SPECviewperf reports performance in frames per second.
Please go here for a more complete description of the tests.

Key Points and Best Practices

SPECviewperf measures the 3D rendering performance of systems running under OpenGL.
The SPECopcSM project group's SPECviewperf 10 is totally new performance evaluation software. In addition to features found in previous versions, it now provides the ability to compare performance of systems running in higher-quality graphics modes that use full-scene anti-aliasing, and measures how effectively graphics subsystems scale when running multithreaded graphics content. Since the SPECviewperf source and binaries have been upgraded to support changes, no comparisons should be made between past results and current results for viewsets running under SPECviewperf 10.
SPECviewperf 10 requires OpenGL 1.5 and a minimum of 1GB of system memory. It currently supports Windows 32/64.

See Also

Disclosure Statement

SPEC® and the benchmark name SPECviewperf® are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results stated above reflect results published on www.spec.org as of Oct 18, 2009. For the latest SPECviewperf benchmark results, visit www.spec.org/gwpg.

Thursday Nov 05, 2009

TPC-C Sun SPARC Enterprise T5440 with Oracle RAC World Record Database Result

Sun and Oracle demonstrate the World's fastest database performance. Sun Microsystems using 12 Sun SPARC Enterprise T5440 servers, 60 Sun Storage F5100 Flash arrays and Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning delivered a world-record TPC-C benchmark result.

  • The 12-node Sun SPARC Enterprise T5440 server cluster result delivered a world record TPC-C benchmark result of 7,646,486.7 tpmC and $2.36 $/tpmC (USD) using Oracle 11g R1 on a configuration available 12/14/09.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the IBM Power 595 (5GHz) with IBM DB2 9.5 database by 26% and has 16% better price/performance on the TPC-C benchmark.

  • The complete Oracle/Sun solution delivered 10.7x better computational density than the IBM configuration (computational density = performance per rack; see the sketch below).

  • The complete Oracle/Sun solution used 8 times fewer racks than the IBM configuration.

  • The complete Oracle/Sun solution has 5.9x better power/performance than the IBM configuration.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the HP Superdome (1.6GHz Itanium2) by 87% and has 19% better price/performance on the TPC-C benchmark.

  • The Oracle/Sun solution utilized Sun FlashFire technology to deliver this result. The Sun Storage F5100 flash array was used for database storage.

  • Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning scales and effectively uses all of the nodes in this configuration to produce the world record performance.

  • This result showed Sun and Oracle's integrated hardware and software stacks provide industry-leading performance.

More information on this benchmark will be posted in the next several days.
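
The ratios quoted above can be reproduced from the figures in the performance landscape table below; the following sketch shows the arithmetic. Small rounding differences against the quoted 10.7x and 8x figures are expected.

    // tpmC, $/tpmC, rack counts and watts per 1,000 tpmC are taken from the table below.
    const sun = { tpmC: 7_646_487, pricePerTpmC: 2.36, racks: 9, wattsPerKtpmC: 9.6 };
    const ibm = { tpmC: 6_085_166, pricePerTpmC: 2.81, racks: 76, wattsPerKtpmC: 56.4 };
    const hp  = { tpmC: 4_092_799, pricePerTpmC: 2.93, racks: 46 };

    console.log(sun.tpmC / ibm.tpmC);                                      // ~1.26 -> 26% more throughput than IBM
    console.log((ibm.pricePerTpmC - sun.pricePerTpmC) / ibm.pricePerTpmC); // ~0.16 -> 16% better price/performance than IBM
    console.log((sun.tpmC / sun.racks) / (ibm.tpmC / ibm.racks));          // ~10.6 -> computational density (performance per rack)
    console.log(ibm.racks / sun.racks);                                    // ~8.4  -> "8 times fewer racks"
    console.log(ibm.wattsPerKtpmC / sun.wattsPerKtpmC);                    // ~5.9  -> power/performance advantage
    console.log(sun.tpmC / hp.tpmC);                                       // ~1.87 -> 87% more throughput than HP
    console.log((hp.pricePerTpmC - sun.pricePerTpmC) / hp.pricePerTpmC);   // ~0.19 -> 19% better price/performance than HP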

Performance Landscape

TPC-C results (sorted by tpmC, bigger is better)


System                            tpmC        Price/tpmC  Avail     Database        Cluster  Racks  w/KtpmC
12 x Sun SPARC Enterprise T5440   7,646,487   2.36 USD    12/14/09  Oracle 11g RAC  Y         9      9.6
IBM Power 595                     6,085,166   2.81 USD    12/10/08  IBM DB2 9.5     N        76     56.4
HP Integrity Superdome            4,092,799   2.93 USD    08/06/07  Oracle 10g R2   N        46     to be added

Avail - Availability date
w/KtpmC - Watts per 1,000 tpmC
Racks - clients, servers, storage, infrastructure

Sun and IBM TPC-C Response times


System                            tpmC        New Order 90th%      New Order Average
                                              Response Time (sec)  Response Time (sec)
12 x Sun SPARC Enterprise T5440   7,646,487   0.170                0.168
IBM Power 595                     6,085,166   1.69                 1.22
Response Time Ratio - Sun Better              9.9x                 7.3x

Sun uses the 7.3x average-response-time comparison to highlight the difference in response times between Sun's solution and IBM's, although note that Sun is nearly 10x faster on New Order transactions measured at the 90th percentile.

It is also interesting to note that none of Sun's response times, average or 90th percentile, for any transaction exceeds 0.25 seconds, while IBM does not have even one interactive transaction, not even the menu, below 0.50 seconds. Graphs of Sun's and IBM's New-Order response times can be found in the full disclosure reports on the TPC website (TPC-C Official Result Page).

Results and Configuration Summary

Hardware Configuration:

    9 racks used to hold

    Servers:
      12 x Sun SPARC Enterprise T5440
      4 x 1.6 GHz UltraSPARC T2 Plus
      512 GB memory
      10 GbE network for cluster
    Storage:
      60 x Sun Storage F5100 Flash Array
      61 x Sun Fire X4275, Comstar SAS target emulation
      24 x Sun StorageTek 6140 (16 x 300 GB SAS 15K RPM)
      6 x Sun Storage J4400
      3 x 80-port Brocade FC switches
    Clients:
      24 x Sun Fire X4170, each with
      2 x 2.53 GHz X5540
      48 GB memory

Software Configuration:

    Solaris 10 10/09
    OpenSolaris 6/09 (COMSTAR) for Sun Fire X4275
    Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning
    Tuxedo CFS-R Tier 1
    Sun Web Server 7.0 Update 5

Benchmark Description

TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

See Also

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Performance Processing Council (TPC). 12-node Sun SPARC Enterprise T5440 Cluster (1.6GHz UltraSPARC T2 Plus, 4 processor) with Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning, 7,646,486.7 tpmC, $2.36/tpmC. Available 12/14/09. IBM Power 595 (5GHz Power6, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5, 6,085,166 tpmC, $2.81/tpmC, available 12/10/08. HP Integrity Superdome(1.6GHz Itanium2, 64 processors, 128 cores, 256 threads) with Oracle 10g Enterprise Edition, 4,092,799 tpmC, $2.93/tpmC. Available 8/06/07. Source: www.tpc.org, results as of 11/5/09.


Tuesday Oct 13, 2009

The Oracle BI EE workload was run on two Sun SPARC Enterprise T5440 servers and achieved world record performance.
  • Two Sun SPARC Enterprise T5440 servers with four 1.6 GHz UltraSPARC T2 Plus processors delivered the best performance of 50K concurrent users on the Oracle BI EE 10.1.3.4 benchmark with Oracle 11g database running on free and open Solaris 10.

  • The two-node Sun SPARC Enterprise T5440 configuration, with Oracle BI EE running on Solaris 10 in 8 Solaris Containers, shows 1.8x scaling over Sun's previous one-node SPARC Enterprise T5440 result with 4 Solaris Containers.

  • The two-node SPARC Enterprise T5440 configuration demonstrated the performance and scalability of the UltraSPARC T2 Plus processor, servicing 50K users with a 0.2776 sec response time.

  • The Sun SPARC Enterprise T5220 server was used as an NFS server with 4 internal SSDs and the ZFS file system which showed significant I/O performance improvement over traditional disk for Business Intelligence Web Catalog activity.

  • IBM has not published any POWER6 processor based results on this important benchmark.

Performance Landscape

System                           Chips  GHz  Processor Type       Users
2 x Sun SPARC Enterprise T5440   8      1.6  UltraSPARC T2 Plus   50,000
1 x Sun SPARC Enterprise T5440   4      1.6  UltraSPARC T2 Plus   28,000
5 x Sun Fire T2000               1      1.2  UltraSPARC T1        10,000

Results and Configuration Summary

Hardware Configuration:

    2 x Sun SPARC Enterprise T5440 (1.6GHz/128GB)
    1 x Sun SPARC Enterprise T5220 (1.2GHz/64GB) and 4 SSDs (used as NFS server)

Software Configuration:

    Solaris10 05/09
    Oracle BI EE 10.1.3.4
    Oracle 11gR1

Benchmark Description

The objective of this benchmark is to highlight how Oracle BI EE can support pervasive deployments in large enterprises, using minimal hardware, by simulating an organization that needs to support more than 25,000 active concurrent users, each operating in mixed mode: ad-hoc reporting, application development, and report viewing.

The user population was divided into a mix of administrative users and business users. A maximum of 28,000 concurrent users were actively interacting and working in the system during the steady-state period. The tests executed 580 transactions per second, with think times of 60 seconds per user, between requests. In the test scenario 95% of the workload consisted of business users viewing reports and navigating within dashboards. The remaining 5% of the concurrent users, categorized as administrative users, were doing application development.

The benchmark scenario used a typical business user sequence of dashboard navigation, report viewing, and drill down. For example, a Service Manager logs into the system and navigates to his own set of dashboards, viz. "Service Manager". The user then selects the "Service Effectiveness" dashboard, which shows him four distinct reports, "Service Request Trend", "First Time Fix Rate", "Activity Problem Areas", and "Cost Per Completed Service Call, 2002 till 2005". The user then proceeds to view the "Customer Satisfaction" dashboard, which also contains a set of 4 related reports. He then proceeds to drill down on some of the reports to see the detail data. Then the user proceeds to more dashboards, for example "Customer Satisfaction" and "Service Request Overview". After navigating through these dashboards, he logs out of the application.

This benchmark did not use a synthetic database schema. The benchmark tests were run on a full production version of the Oracle Business Intelligence Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a true customer scenario.

See Also

Disclosure Statement

Oracle BI EE benchmark results 10/13/2009, see

Tuesday Oct 13, 2009

The Sun SPARC Enterprise T5440 server with 1.6GHz UltraSPARC T2 Plus processors, using Solaris Containers, Sun Open Storage flash, and Sun Java System Web Server 7.0 Update 5, achieved a World Record SPECweb2005 result.
  • Sun has obtained a World Record SPECweb2005 performance result of 100,209 SPECweb2005 on the Sun SPARC Enterprise T5440, running Solaris 10 10/09, Sun Java System Web Server 7.0 Update 5, and the Java HotSpot™ Server VM.

  • This result demonstrates performance leadership of the Sun SPARC Enterprise T5440 server and its scalability, by using Solaris Containers to consolidate multiple web serving environments, and Sun OpenStorage Flash technology to store large datasets for fast data retrieval.

  • The Sun SPARC Enterprise T5440 delivers 21% greater SPECweb2005 performance than the HP DL370 G6 with 3.2GHz Xeon W5580 processors.

  • The Sun SPARC Enterprise T5440 delivers 40% greater SPECweb2005 performance than the HP DL 585 G5 with four 3.114 GHz Opteron 8393 SE processors.

  • The Sun SPARC Enterprise T5440 delivers 2x the SPECweb2005 performance of the HP DL 580 G5 with four 2.66GHz Xeon X7460 processors.

  • There are no IBM Power6 results on the SPECweb2005 benchmark.

  • This benchmark result clearly demonstrates that the Sun SPARC Enterprise T5440 running Solaris 10 10/09 and Sun Java System Webserver 7.0 Update 5 can support thousands of concurrent web server sessions and is an industry leader in web serving with a Sun solution.

Performance Landscape

Server        Processor        SPECweb2005  Banking*  Ecomm*   Support*  Webserver       OS
Sun T5440     4x 1.6 T2 Plus   100,209      176,500   133,000  95,000    Java WebServer  Solaris
HP DL370 G6   2x 3.2 W5580      83,073      117,120   142,080  76,352    Rock            RedHat Linux
HP DL585 G5   4x 3.11 O8393     71,629      117,504   123,072  56,320    Rock            RedHat Linux
HP DL580 G5   4x 2.66 X7460     50,013       97,632    69,600  40,800    Rock            RedHat Linux

* Banking - SPECweb2005-Banking
Ecomm - SPECweb2005-Ecommerce
Support - SPECweb2005-Support

Results and Configuration Summary

Hardware Configuration:

1 Sun SPARC Enterprise T5440 with

  • 4 x UltraSPARC T2 Processor 8 core, 64 threads, 1.6 GHz
  • 254 GB memory
  • 6 x 4Gb PCI Express 8-Port Host Adapter (SG-XPCIE8SAS-E-Z)
  • 1 x Sun Storage F5100 Flash Array (TA5100RASA4-80AA)
  • 1 x Sun Storage F5100 Flash Array (TA5100RASA4-40AA)

Server Software Configuration:

  • Solaris 10 10/09
  • JAVA System Web Server 7.0 Update 5
  • Java Hotspot™ Server VM

Network configuration:

  • 1 x Arista DCS-7124s 24-10GbE port switch
  • 1 x Cisco 2970 series (WS-C2970G-24TS-E) switch for the three 1 GbE networks

Back-end Simulator:

1 Sun Fire X4270 with

  • 2 x 2.93 GHz Intel X5570 Quad core
  • 48GB memory
  • Solaris 10 10/09
  • JSWS 7.0 Update 5
  • Java Hotspot™ Server VM

Clients:

8 Sun Blade™ T6320

  • 1 x 1.417 GHz UltraSPARC-T2
  • 64 GB memory
  • Solaris 10 5/09
  • Java Hotspot™ Server VM

8 Sun Blade™ 6270

  • 2 x 2.93 GHz Intel X5570 Quad core
  • 36 GB memory
  • Solaris 10 5/09
  • Java Hotspot™ Server VM

Benchmark Description

SPECweb2005, successor to SPECweb99 and SPECweb99_SSL, is an industry standard benchmark for evaluating Web Server performance developed by SPEC. The benchmark simulates multiple user sessions accessing a Web Server and generating static and dynamic HTTP requests. The major features of SPECweb2005 are:

  • Measures simultaneous user sessions
  • Dynamic content: currently PHP and JSP implementations
  • Page images requested using 2 parallel HTTP connections
  • Multiple, standardized workloads: Banking (HTTPS), E-commerce (HTTP and HTTPS), and Support (HTTP)
  • Simulates browser caching effects
  • File accesses more accurately simulate today's disk access patterns

Key Points and Best Practices

  • The server was divided into four Solaris Containers and a single web server instance was executed in each container.
  • Four processor sets were created (with varying numbers of threads depending on the workload) to run the web server in. This was done to reduce memory access latency using the physical memory closest to the processor. All interrupts were run on the remaining threads.
  • Each web server is executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • Two Sun Storage F5100 Flash Arrays (holding the target file set and logs) were shared by the four containers for fast data retrieval.
  • Use of Solaris Containers highlights the consolidation of multiple web serving environments on a single server.
  • Use of the Sun Ext I/O Expansion unit and Sun Storage F5100 Flash Arrays highlight the expandability of the server.

Disclosure Statement

Sun SPARC Enterprise T5440 (8 cores, 1 chip) 100209 SPECweb2005, was submitted to SPEC for review on October 13, 2009. HP ProLiant DL370 G6 (8 cores, 2 chips) 83,073 SPECweb2005. HP ProLiant DL585 G5 (16 cores, 4 chips) 71,629 SPECweb2005. HP ProLiant DL580 G5 (24 cores, 4 chips) 50,013 SPECweb2005. SPEC, SPECweb reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of Oct 10, 2009.

Tuesday Oct 13, 2009

Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark Sun SPARC Enterprise M9000/32 SPARC64 VII

A World Record 32-processor result on the SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark

  • The Sun SPARC Enterprise M9000 (32 processors, 128 cores, 256 threads) set a World Record for 32-processor systems on the SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Standard Sales and Distribution (SD) Benchmark, as of Oct. 12, 2009.

  • The 32-way Sun SPARC Enterprise M9000 with 2.88 GHz SPARC64 VII processors achieved 17,430 users on the two-tier SAP Sales and Distribution (SD) standard SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark.

  • The Sun SPARC Enterprise M9000 result is 4.6x faster than the only IBM 5GHz Power6 unicode result, which was published on the IBM p550 using the new SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Standard Sales and Distribution (SD) Benchmark.

  • IBM has not submitted any p595 results on the new SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Standard Sales and Distribution (SD) Benchmark.

  • HP has not submitted any Itanium2 results on the new SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Standard Sales and Distribution (SD) Benchmark.

  • In January 2009, a new version, the Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark, was released. This new release has higher cpu requirements and so yields from 25-50% fewer users compared to the previous Two-tier SAP ERP 6.0 (non-unicode) Standard Sales and Distribution (SD) Benchmark. 10-30% of this is due to the extra overhead from the processing of the larger character strings due to Unicode encoding. See this SAP Note for more details.

  • Unicode is a computing standard that allows for the representation and manipulation of text expressed in most of the world's writing systems. Before the Unicode requirement, this benchmark used ASCII characters meaning each was just 1 byte. The new version of the benchmark requires Unicode characters and the Application layer (where ~90% of the cycles in this benchmark are spent) uses a new encoding, UTF-16, which uses 2 bytes to encode most characters (including all ASCII characters) and 4 bytes for some others. This requires computers to do more computation and use more bandwidth and storage for most character strings. Refer to the above SAP Note for more details.
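
As a rough illustration of the byte arithmetic above, the following sketch (TypeScript, whose strings are themselves sequences of UTF-16 code units) compares the storage needed for a short ASCII string before and after the switch to Unicode; the string itself is made up.

    const item = "ORDER-4711";            // hypothetical ASCII string: 1 byte per character before Unicode
    console.log(item.length);             // 10 characters -> 10 bytes in the old ASCII-based encoding
    console.log(item.length * 2);         // 20 bytes once encoded as UTF-16 (2 bytes per BMP character)
    console.log("\u{1D54A}".length * 2);  // 4 bytes: a character outside the BMP takes two UTF-16 code units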

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order).

SAP ERP 6.0 Enhancement Pack 4 (Unicode) Results
(New version of the benchmark as of January 2009)

System                        Configuration                       OS / Database            Users   SAP ERP/ECC Release     SAPS    Date
Sun SPARC Enterprise M9000    32x SPARC64 VII @2.88GHz, 1024 GB   Solaris 10 / Oracle 10g  17,430  2009 6.0 EP4 (Unicode)  95,480  12-Oct-09
IBM System 550                4x Power6 @5GHz, 64 GB              AIX 6.1 / DB2 9.5         3,752  2009 6.0 EP4 (Unicode)  20,520  16-Jun-09

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Results and Configuration Summary

Certified Result:

    Number of SAP SD benchmark users: 17,430
    Average dialog response time: 0.95 seconds
    Throughput:
      Fully processed order line items/hour: 1,909,670
      Dialog steps/hour: 5,729,000
    SAPS: 95,480
    SAP Certification: 2009038
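
For reference, the certified SAPS figure follows directly from the throughput numbers above, since SAP defines 100 SAPS as 2,000 fully business-processed order line items per hour (equivalently, 6,000 dialog steps per hour); a quick sketch:

    const orderLineItemsPerHour = 1_909_670;
    const dialogStepsPerHour = 5_729_000;
    console.log(orderLineItemsPerHour / 2_000 * 100);  // ~95,483 -> certified as 95,480 SAPS
    console.log(dialogStepsPerHour / 6_000 * 100);     // ~95,483 -> the same figure from dialog steps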

Hardware Configuration:

    Sun SPARC Enterprise M9000
      32 x 2.88GHz SPARC64 VII, 1024 GB memory
      6 x 6140 storage arrays

Software Configuration:

    Solaris 10
    SAP ECC Release: 6.0 Enhancement Pack 4 (Unicode)
    Oracle10g

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard SAP ERP 6.0 2005/EP4 (Unicode) application benchmarks as of 10/12/09: Sun SPARC Enterprise M9000 (32 processors, 128 cores, 256 threads) 17,430 SAP SD Users, 32 x 2.88 GHz SPARC VII, 1024 GB memory, Oracle10g, Solaris10, Cert# 2009038. IBM System 550 (4 processors, 8 cores, 16 threads) 3,752 SAP SD Users, 4x 5 GHz Power6, 64 GB memory, DB2 9.5, AIX 6.1, Cert# 2009023. Sun SPARC Enterprise M9000 (64 processors, 256 cores, 512 threads) 64 x 2.52 GHz SPARC64 VII, 1024GB memory, 39,100 SD benchmark users, 1.93 sec. avg. response time, Cert#2008042, Oracle 10g, Solaris 10, SAP ECC Release 6.0.

SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

Sunday Oct 11, 2009

TPC-C Sun SPARC Enterprise T5440 with Oracle RAC World Record Database Result

Sun and Oracle demonstrate the World's fastest database performance. Sun Microsystems using 12 Sun SPARC Enterprise T5440 servers, 60 Sun Storage F5100 Flash arrays and Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning delivered a world-record TPC-C benchmark result.

  • The 12-node Sun SPARC Enterprise T5440 server cluster result delivered a world record TPC-C benchmark result of 7,646,486.7 tpmC and $2.36 $/tpmC (USD) using Oracle 11g R1 on a configuration available 12/14/09.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the IBM Power 595 (5GHz) with IBM DB2 9.5 database by 26% and has 16% better price/performance on the TPC-C benchmark.

  • The complete Oracle/Sun solution used 10.7x better computational density than the IBM configuration (computational density = performance/rack).

  • The complete Oracle/Sun solution used 8 times fewer racks than the IBM configuration.

  • The complete Oracle/Sun solution has 5.9x better power/performance than the IBM configuration.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the HP Superdome (1.6GHz Itanium2) by 87% and has 19% better price/performance on the TPC-C benchmark.

  • The Oracle/Sun solution utilized Sun FlashFire technology to deliver this result. The Sun Storage F5100 flash array was used for database storage.

  • Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning scales and effectively uses all of the nodes in this configuration to produce the world record performance.

  • This result showed Sun and Oracle's integrated hardware and software stacks provide industry-leading performance.

More information on this benchmark will be posted in the next several days.

Performance Landscape

TPC-C results (sorted by tpmC, bigger is better)


System                            tpmC        Price/tpmC  Avail     Database        Cluster  Racks  w/KtpmC
12 x Sun SPARC Enterprise T5440   7,646,487   2.36 USD    12/14/09  Oracle 11g RAC  Y         9      9.6
IBM Power 595                     6,085,166   2.81 USD    12/10/08  IBM DB2 9.5     N        76     56.4
Bull Escala PL6460R               6,085,166   2.81 USD    12/15/08  IBM DB2 9.5     N        71     56.4
HP Integrity Superdome            4,092,799   2.93 USD    08/06/07  Oracle 10g R2   N        46     to be added

Avail - Availability date
w/KtpmC - Watts per 1,000 tpmC
Racks - clients, servers, storage, infrastructure

Results and Configuration Summary

Hardware Configuration:

    9 racks used to hold

    Servers:
      12 x Sun SPARC Enterprise T5440
      4 x 1.6 GHz UltraSPARC T2 Plus
      512 GB memory
      10 GbE network for cluster
    Storage:
      60 x Sun Storage F5100 Flash Array
      61 x Sun Fire X4275, Comstar SAS target emulation
      24 x Sun StorageTek 6140 (16 x 300 GB SAS 15K RPM)
      6 x Sun Storage J4400
      3 x 80-port Brocade FC switches
    Clients:
      24 x Sun Fire X4170, each with
      2 x 2.53 GHz X5540
      48 GB memory

Software Configuration:

    Solaris 10 10/09
    OpenSolaris 6/09 (COMSTAR) for Sun Fire X4275
    Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning
    Tuxedo CFS-R Tier 1
    Sun Web Server 7.0 Update 5

Benchmark Description

TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

POSTSCRIPT: Here are some comments on IBM's grasping-at-straws perf/core attacks on the TPC-C result:
c0t0d0s0 blog: "IBM's Reaction to Sun & Oracle TPC-C"

See Also

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Performance Processing Council (TPC). 12-node Sun SPARC Enterprise T5440 Cluster (1.6GHz UltraSPARC T2 Plus, 4 processor) with Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning, 7,646,486.7 tpmC, $2.36/tpmC. Available 12/14/09. IBM Power 595 (5GHz Power6, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5, 6,085,166 tpmC, $2.81/tpmC, available 12/10/08. HP Integrity Superdome(1.6GHz Itanium2, 64 processors, 128 cores, 256 threads) with Oracle 10g Enterprise Edition, 4,092,799 tpmC, $2.93/tpmC. Available 8/06/07. Source: www.tpc.org, results as of 10/11/09.

Friday Aug 28, 2009

Sun Fire X4270 Server World Record Two Processor performance result on Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark

  • World Record 2-processor performance result on the two-tier SAP ERP 6.0 enhancement pack 4 (unicode) standard sales and distribution (SD) benchmark on the Sun Fire X4270 server.

  • The Sun Fire X4270 server with two Intel Xeon X5570 processors (8 cores, 16 threads) achieved 3,800 SAP SD Benchmark users running SAP ERP application release 6.0 enhancement pack 4 benchmark with unicode software, using Oracle 10g database and Solaris 10 operating system.

  • This benchmark result highlights the optimal performance of SAP ERP on Sun Fire servers running the Solaris OS and the seamless multilingual support available for systems running SAP applications.

  • The Sun Fire X4270 server using 2 Intel Xeon X5570 processors, 48 GB memory and the Solaris 10 operating system beat the IBM System 550 server using 4 POWER6 processors, 64 GB memory and the AIX 6.1 operating system.
  • The Sun Fire X4270 server using 2 Intel Xeon X5570 processors, 48 GB memory and the Solaris 10 operating system beat the HP ProLiant BL460c G6 server using 2 Intel Xeon X5570 processors, 48 GB memory and the Windows Server 2008 operating system.

  • In January 2009, a new version, the Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark, was released. This new release has higher cpu requirements and so yields from 25-50% fewer users compared to the previous Two-tier SAP ERP 6.0 (non-unicode) Standard Sales and Distribution (SD) Benchmark. 10-30% of this is due to the extra overhead from the processing of the larger character strings due to Unicode encoding. Refer to SAP Note for more details. Note: username and password for SAP Service Marketplace required.

  • Unicode is a computing standard that allows for the representation and manipulation of text expressed in most of the world's writing systems. Before the Unicode requirement, this benchmark used ASCII characters meaning each was just 1 byte. The new version of the benchmark requires Unicode characters and the Application layer (where ~90% of the cycles in this benchmark are spent) uses a new encoding, UTF-16, which uses 2 bytes to encode most characters (including all ASCII characters) and 4 bytes for some others. This requires computers to do more computation and use more bandwidth and storage for most character strings. Refer to SAP Note for more details. Note: username and password for SAP Service Marketplace required.

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order).

SAP ERP 6.0 Enhancement Pack 4 (Unicode) Results
(New version of the benchmark as of January 2009)

All results below use SAP ERP/ECC Release 2009 6.0 EP4 (Unicode).

System                          Configuration                         OS / Database                                 Users  SAPS    SAPS/Proc  Date
Sun Fire X4270                  2x Intel Xeon X5570 @2.93GHz, 48 GB   Solaris 10 / Oracle 10g                       3,800  21,000  10,500     21-Aug-09
IBM System 550                  4x Power6 @5GHz, 64 GB                AIX 6.1 / DB2 9.5                             3,752  20,520   5,130     16-Jun-09
Sun Fire X4270                  2x Intel Xeon X5570 @2.93GHz, 48 GB   Solaris 10 / Oracle 10g                       3,700  20,300  10,150     30-Mar-09
HP ProLiant BL460c G6           2x Intel Xeon X5570 @2.93GHz, 48 GB   Windows Server 2008 EE / SQL Server 2008      3,415  18,670   9,335     04-Aug-09
Fujitsu PRIMERGY TX/RX 300 S5   2x Intel Xeon X5570 @2.93GHz, 48 GB   Windows Server 2008 EE / SQL Server 2008      3,328  18,170   9,085     13-May-09
HP ProLiant BL460c G6           2x Intel Xeon X5570 @2.93GHz, 48 GB   Windows Server 2008 EE / SQL Server 2008      3,310  18,070   9,035     27-Mar-09
HP ProLiant DL380 G6            2x Intel Xeon X5570 @2.93GHz, 48 GB   Windows Server 2008 EE / SQL Server 2008      3,300  18,030   9,015     27-Mar-09
Fujitsu PRIMERGY BX920 S1       2x Intel Xeon X5570 @2.93GHz, 48 GB   Windows Server 2008 EE / SQL Server 2008      3,260  17,800   8,900     18-Jun-09
NEC Express5800                 2x Intel Xeon X5570 @2.93GHz, 48 GB   Windows Server 2008 EE / SQL Server 2008      3,250  17,750   8,875     28-Jul-09
HP ProLiant DL380 G6            2x Intel Xeon X5570 @2.93GHz, 48 GB   SuSE Linux Enterprise Server 10 / MaxDB 7.8   3,171  17,380   8,690     17-Apr-09

EE = Enterprise Edition

Complete benchmark results may be found at the SAP benchmark website: http://www.sap.com/benchmark.

Results and Configuration Summary

Hardware Configuration:

    One Sun Fire X4270
      2 x 2.93 GHz Intel Xeon X5570 processors (2 processors / 8 cores / 16 threads)
      48 GB memory
      Sun Storage 6780 with 48 x 73GB 15KRPM 4Gb FC-AL and 16 x 146GB 15KRPM 4Gb FC-AL Drives

Software Configuration:

    Solaris 10
    SAP ECC Release: 6.0 Enhancement Pack 4 (Unicode)
    Oracle 10g

Certified Results:

Performance: 3800 benchmark users
SAP Certification: 2009033

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

Key Points and Best Practices

  • Set up the storage (LSI-OEM) to deliver the needed raw devices directly out of the storage and do not use any software layer in between.

See Also

Benchmark Tags

World-Record, Performance, SAP-SD, Solaris, Oracle, Intel, X64, x86, HP, IBM, Application, Database

Disclosure Statement

    Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 08/21/09: Sun Fire X4270 (2 processors, 8 cores, 16 threads) 3,800 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, Oracle 10g, Solaris 10, Cert# 2009033. IBM System 550 (4 processors, 8 cores, 16 threads) 3,752 SAP SD Users, 4x 5 GHz Power6, 64 GB memory, DB2 9.5, AIX 6.1, Cert# 2009023. Sun Fire X4270 (2 processors, 8 cores, 16 threads) 3,700 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, Oracle 10g, Solaris 10, Cert# 2009005. HP ProLiant BL460c G6 (2 processors, 8 cores, 16 threads) 3,415 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009031. Fujitsu PRIMERGY TX/RX 300 S5 (2 processors, 8 cores, 16 threads) 3,328 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009014. HP ProLiant BL460c G6 (2 processors, 8 cores, 16 threads) 3,310 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009003. HP ProLiant DL380 G6 (2 processors, 8 cores, 16 threads) 3,300 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009004. Fujitsu PRIMERGY BX920 S1 (2 processors, 8 cores, 16 threads) 3,260 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009024. NEC Express5800 (2 processors, 8 cores, 16 threads) 3,250 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009027. HP ProLiant DL380 G6 (2 processors, 8 cores, 16 threads) 3,171 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, MaxDB 7.8, SuSE Linux Enterprise Server 10, Cert# 2009006. IBM System x3650 M2 (2 Processors, 8 Cores, 16 Threads) 5,100 SAP SD users,2x 2.93 Ghz Intel Xeon X5570, DB2 9.5, Windows Server 2003 Enterprise Edition, Cert# 2008079. HP ProLiant DL380 G6 (2 processors, 8 cores, 16 threads) 4,995 SAP SD Users, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, SQL Server 2005, Windows Server 2003 Enterprise Edition, Cert# 2008071.

    SAP, R/3, reg TM of SAP AG in Germany and other countries. More info: www.sap.com/benchmark

Wednesday Aug 12, 2009

Significance of Results

The Sun SPARC Enterprise T5240 server running the Sun Java Messaging server 6.3 achieved World Record SPECmail2009 results using ZFS.

  • A Sun SPARC Enterprise T5240 server powered by two 1.6 GHz UltraSPARC T2 Plus processors running the Sun Java Communications Suite 5 software along with the Solaris 10 Operating System and using six Sun StorageTek 2540 arrays achieved a new World Record 12000 SPECmail_Ent2009 IMAP4 users at 57,758 Sessions/hour for SPECmail2009.
  • The Sun SPARC Enterprise T5240 server achieved twice the number of users and twice the sessions/hour rate of the Apple Xserv3,1 solution equipped with Intel Nehalem processors.
  • The Sun result was obtained using ~10% fewer disk spindles with the Sun StorageTek 2540 RAID controller direct attach storage solution versus Apple's direct attached storage.
  • This benchmark result demonstrates that the Sun SPARC Enterprise T5240 server together with Sun Java Communication Suite 5 component Sun Java System Messaging Server 6.3, Solaris 10 and ZFS on Sun StorageTek 2540 arrays supports a large, enterprise level IMAP mail server environment. This solution is reliable, low cost, and low power, delivering the best performance and maximizing the data integrity with Sun's ZFS file systems.

Performance Landscape

SPECmail2009 (ordered by performance)

System                       Processor                                SPECmail_Ent2009  SPECmail2009
                             Type                GHz   Ch, Co, Th     Users             Sessions/hour
Sun SPARC Enterprise T5240   UltraSPARC T2 Plus  1.6   2, 16, 128     12,000            57,758
Sun Fire X4275               Xeon X5570          2.93  2, 8, 16        8,000            38,348
Apple Xserv3,1               Xeon X5570          2.93  2, 8, 16        6,000            28,887
Sun SPARC Enterprise T5220   UltraSPARC T2       1.4   1, 8, 64        3,600            17,316

Notes:

    Number of SPECmail_Ent2009 users (bigger is better)
    SPECmail2009 Sessions/hour (bigger is better)
    Ch, Co, Th: Chips, Cores, Threads

Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org

Results and Configuration Summary

Hardware Configuration:

    Sun SPARC Enterprise T5240

      2 x 1.6 GHz UltraSPARC T2 Plus processors
      128 GB
      8 x 146GB, 10K RPM SAS disks

    6 x Sun StorageTek 2540 Arrays,

      4 arrays with 12 x 146GB 15K RPM SAS disks
      2 arrays with 12 x 73GB 15K RPM SAS disks

    2 x Sun Fire X4600 benchmark manager, load generator and mail sink

      8 x AMD Opteron 8356 2.7 GHz QC processors
      64 GB
      2 x 73GB 10K RPM SAS disks

    Sun Fire X4240 load generator

      2 x AMD Opteron 2384 2.7 GHz DC processors
      16 GB
      2 x 73GB 10K RPM SAS disks

Software Configuration:

    Solaris 10
    ZFS
    Sun Java Communication Suite 5
    Sun Java System Messaging Server 6.3

Benchmark Description

The SPECmail2009 benchmark measures the ability of corporate e-mail systems to meet today's demanding e-mail users over fast corporate local area networks (LAN). The SPECmail2009 benchmark simulates corporate mail server workloads that range from 250 to 10,000 or more users, using industry standard SMTP and IMAP4 protocols. This e-mail server benchmark creates client workloads based on a 40,000 user corporation, and uses folder and message MIME structures that include both traditional office documents and a variety of rich media content. The benchmark also adds support for encrypted network connections using industry standard SSL v3.0 and TLS 1.0 technology. SPECmail2009 replaces all versions of SPECmail2008, first released in August 2008. The results from the two benchmarks are not comparable.

Software on one or more client machines generates a benchmark load for a System Under Test (SUT) and measures the SUT response times. A SUT can be a mail server running on a single system or a cluster of systems.

A SPECmail2009 'run' simulates a 100% load level associated with the specific number of users, as defined in the configuration file. The mail server must maintain a specific Quality of Service (QoS) at the 100% load level to produce a valid benchmark result. If the mail server does maintain the specified QoS at the 100% load level, the performance of the mail server is reported as SPECmail_Ent2009 SMTP and IMAP Users at SPECmail2009 Sessions per hour. The SPECmail_Ent2009 users at SPECmail2009 Sessions per Hour metric reflects the unique workload combination for a SPEC IMAP4 user.

Key Points and Best Practices

  • Each Sun StorageTek 2540 array was configured with 6 hardware RAID1 volumes. A total of 36 RAID1 volumes were configured with 24 of size 146GB and 12 of size 73GB. Four ZPOOLs of (6x146GB RAID1 volumes) were mounted as the four primary message stores and ZFS file systems. Four ZPOOLs of (8x73GB RAID1 volumes) were mounted as the four primary message indexes. The hardware RAID1 volumes were created with 64K stripe size without read ahead turned on. The 7x146GB internal drives were used to create four ZPOOLs and ZFS file systems for the LDAP, store metadata, queue and the mailserver log.

  • The clients used these Java options: java -d64 -Xms4096m -Xmx4096m -XX:+AggressiveHeap

  • See the SPEC Report for all OS, network and messaging server tunings.

See Also

Disclosure Statement

SPEC, SPECmail reg tm of Standard Performance Evaluation Corporation. Results as of 08/07/2009 on www.spec.org. SPECmail2009: Sun SPARC Enterprise T5240 (16 cores, 2 chips) SPECmail_Ent2009 12000 users at 57,758 SPECmail2009 Sessions/hour. Apple Xserv3,1 (8 cores, 2 chips) SPECmail_Ent2009 6000 users at 28,887 SPECmail2009 Sessions/hour.

Wednesday Jul 22, 2009

Sun has upgraded the UltraSPARC T2 and UltraSPARC T2 Plus processors to 1.6 GHz. As described in some detail in yesterday's post, new results show SPEC CPU2006 performance improvements vs. previous systems that often exceed the clock speed improvement. The scaling can be attributed to both memory system improvements and software improvements, such as the Sun Studio 12 Update 1 compiler.

A MHz improvement within a product line is often useful. If yesterday's chip runs at speed n and today's at n*1.12 then, intuitively, sure, I'll take today's.

Comparing MHz across product lines is often counter-intuitive. Consider that Sun's new systems provide:

  • up to 68% more throughput than the 4.7 GHz POWER6+ [1], and
  • up to 3x the throughput of the Itanium 9150N [2].

The comparisons are particularly striking when one takes into account the cache size advantage for both the POWER6+ and the Itanium 9150N, and the MHz advantage for the POWER6+:

Processor              GHz  Number of hw    Size of last       SPECint_rate_base2006
                            cache levels    cache (per chip)
UltraSPARC T2 /        1.6  2               4 MB               1 chip: 89
  UltraSPARC T2 Plus                                           2 chips: 171
                                                               4 chips: 338
POWER6+                4.7  3               32 MB              Best 2-chip result: 102. UltraSPARC T2 Plus delivers 68% more integer throughput [1]
Itanium 9150N          1.6  3               24 MB              Best 4-chip result: 114. UltraSPARC T2 Plus delivers 3x the integer throughput. [2]

These are per-chip results, not per-core or per-thread. Sun's CMT processors are designed for overall system throughput: how much work can the overall system get done.

A mystery: With comparatively smaller caches and modest clock rates, why do the Sun CMT processors win?

The performance hole: Memory latency. From the point of view of a CPU chip, the big performance problem is that memory latency is inordinately long compared to chip cycle times.

A hardware designer can attempt to cover up that latency with very large caches, as in the POWER6+ and Itanium, and this works well when running a small number of modest-sized applications. Large caches become less helpful, though, as workloads become more complex.

MHz isn't everything. In fact, MHz hardly counts at all when the problem is memory latency. Suppose the hot part of an application looks like this:

    loop:
        computational instruction
        computational instruction
        computational instruction
        memory access instruction
        branch to loop

For an application that looks like this, the computational instructions may complete in only a few cycles, while the memory access instruction may easily require on the order of 100ns - which, for a 1 GHz chip, is on the order of 100 cycles. If the processor speed is increased by a factor of 4, but memory speed is not, then memory is still 100ns away, and when measured in cycles, it is now 400 cycles distant. The overall loop hardly speeds up at all.
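
A back-of-the-envelope model of that loop makes the point; the handful of one-cycle compute instructions and the 100 ns memory latency are the illustrative values used above, not measurements.

    // Per-iteration time: compute time shrinks with clock speed, memory latency does not.
    const loopTimeNs = (clockGHz: number, computeInstructions = 4, memLatencyNs = 100): number =>
      computeInstructions / clockGHz + memLatencyNs;

    console.log(loopTimeNs(1));  // ~104 ns per iteration at 1 GHz
    console.log(loopTimeNs(4));  // ~101 ns per iteration at 4 GHz: 4x the clock, only ~3% faster loop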

Lest the reader think I am making this up - consider page 8 of this IBM talk from April, 2008 regarding the POWER6:

[Figure: latencies chart from the IBM POWER6 presentation]

The IBM POWER systems have some impressive performance characteristics - if your application is tiny enough to fit in its first or second level cache. But memory latency is not impressive. If your workload requires multiple concurrent threads accessing a large memory space, Sun's CMT approach just might be a better fit.

Operating System Overhead: A context switch from one process to another is mediated by operating system services. The OS saves the context of the process that is currently running - typically dozens of program registers plus other state (such as virtual address space information); decides which process to run next (which may require access to several OS data structures); and loads the context for the new process (registers, virtual address context, etc.). If the system is running many processes, then caches are unlikely to be helpful during this context switch, and thousands of cycles may be spent on main memory accesses.
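
A rough way to observe this overhead (an illustrative sketch, not a measurement from the systems discussed here) is the classic pipe ping-pong between two processes: every one-byte round trip forces at least two context switches, so the elapsed time divided by the round-trip count bounds the per-switch cost.

    /* ctx_switch_pingpong.c - rough, illustrative estimate of context-switch cost.
     * Two processes bounce a byte over a pair of pipes; each round trip
     * includes two context switches plus pipe read/write overhead.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define ROUNDS 100000

    int main(void)
    {
        int p2c[2], c2p[2];              /* parent->child and child->parent pipes */
        char b = 'x';

        if (pipe(p2c) || pipe(c2p)) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                  /* child: echo every byte back */
            for (int i = 0; i < ROUNDS; i++) {
                read(p2c[0], &b, 1);
                write(c2p[1], &b, 1);
            }
            _exit(0);
        }

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < ROUNDS; i++) {
            write(p2c[1], &b, 1);
            read(c2p[0], &b, 1);
        }
        gettimeofday(&t1, NULL);
        waitpid(pid, NULL, 0);

        double usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%.2f microseconds per round trip (>= 2 context switches)\n",
               usec / ROUNDS);
        return 0;
    }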

Design for throughput: Sun's CMT approach handles the complexity of real-world applications by keeping up to 64 threads of execution resident on each chip. When a long-latency stall occurs, such as an access to main memory, the chip switches to executing instructions on behalf of other, non-stalled threads, thus improving overall system throughput. No operating system intervention is required, since the hardware thread contexts are already on the chip and resources are shared among them.

[1] http://www.spec.org/cpu2006/results/res2009q2/cpu2006-20090427-07263.html
[2] http://www.spec.org/cpu2006/results/res2009q2/cpu2006-20090522-07485.html

Competitive results retrieved from www.spec.org 20 July 2009. Sun's CMT results have been submitted to SPEC. SPEC, SPECfp, SPECint are registered trademarks of the Standard Performance Evaluation Corporation.

Tuesday Jul 21, 2009

Significance of Results

The Sun SPARC Enterprise T5240 server equipped with two UltraSPARC T2 processors running at 1.6 GHz delivered World Record ZXTM HTTPThroughput results.

  • Sun SPARC Enterprise T5240 (2 UltraSPARC T2 Plus 1.6GHz) delivers an HTTPThroughput of 13.4 Gbit/sec and a price-performance of 5.5K $/Gb/sec, which is 34% better performance and 2.6x better price-performance than an f5 BIG-IP VIPRION (Chassis + 1 blade).
  • Sun SPARC Enterprise T5240 (2 UltraSPARC T2 Plus 1.6GHz) delivers an HTTPThroughput of 13.4 Gbit/sec and a price-performance of 5.5K $/Gb/sec, which is 91% better performance and 2.7x better price-performance than an f5 BIG-IP 8800.
  • Sun SPARC Enterprise T5240 (2 UltraSPARC T2 Plus 1.6GHz) delivers an HTTPThroughput of 13.4 Gbit/sec and a price-performance of 5.5K $/Gb/sec, which is 3.3x better price-performance than a Citrix 12000.
  • Sun's UltraSPARC T2 Plus processor includes support for common bulk ciphers, secure hash operations, and both prime and binary field Elliptic Curve Cryptography. The UltraSPARC T2 processor supports RC4, DES, 3DES, AES-128, AES-192, AES-256, MD5, SHA-1, and SHA-256.

Performance Landscape

Zeus ZXTM HTTPThroughput Chart (ordered by performance)

System | Gb/sec | Price (HW+SW) | $/perf ($/Gb/sec)
Sun SPARC Enterprise T5240 (2x 1.6GHz US T2 Plus) | 13.4 | $74K | 5.5K
f5 BIG-IP VIPRION | 10.0 | $141K | 14.1K
Sun SPARC Enterprise T5140 (2x 1.2GHz US T2 Plus) | 9.1 | $55K | 6.1K
f5 BIG-IP 8800 | 7.0 | $105K | 15.1K
f5 BIG-IP 6900 | 6.0 | $71K | 11.8K
Citrix 12000 | 6.0 | $110K | 18.3K
Sun SPARC Enterprise T5120 (1x 1.2GHz US T2) | 5.9 | $46K | 7.8K
Citrix 10010 | 4.8 | $85K | 17.7K

Performance graph of f5, Citrix and previous Sun results at: http://www.zeus.com/news/press_articles/zeus-price-performance-press-release.html?gclid=CLn4jLuuk5cCFQsQagod7gTkJA.

Results and Configuration Summary

Hardware Configuration:
    Sun SPARC Enterprise T5240 with
    • 2x 1.6GHz UltraSPARC T2 Plus
    • 16 GB memory
    • 1 internal 146GB 10K SAS drive
    • 2x Sun 10GbE Xaui Card - (SESX7XA1Z)
    • 2 x Dual 10GbE SFP+ PCIe ( X1109a-z ) with 1 X1109a-z per card

Software Configuration:

    Solaris
    Zeus ZXTM version 5.1r1

Benchmark Description

The benchmark tests HTTP throughput for persistent HTTP connections. Large-file bandwidth (Gbit/s) is measured by having multiple clients request 100MB files over HTTP via the ZXTM load balancer. Load is applied using ZeusBench, a benchmarking tool included in ZXTM 5.1r1 that Zeus uses for internal performance testing and load generation.

See Also

Performance on the Zeus Website

Disclosure Statement

Zeus is TM of Zeus Technology Limited. Results as of 7/21/2009 on http://www.zeus.com/news/press_articles/zeus-price-performance-press-release.html?gclid=CLn4jLuuk5cCFQsQagod7gTkJA.

Tuesday Jul 21, 2009

Oracle BI EE Sun SPARC Enterprise T5440 World Record Performance

The Sun SPARC Enterprise T5440 server running the new 1.6 GHz UltraSPARC T2 Plus processor delivered world record performance on Oracle Business Intelligence Enterprise Edition (BI EE) tests using Sun's ZFS.
  • The Sun SPARC Enterprise T5440 server with four 1.6 GHz UltraSPARC T2 Plus processors delivered the best single system performance of 28K concurrent users on the Oracle BI EE benchmark. This result used Solaris 10 with Solaris Containers and the Oracle 11g Database software.

  • The benchmark demonstrates the scalability of the Oracle Business Intelligence Cluster with 4 nodes running in Solaris Containers within a single Sun SPARC Enterprise T5440 server.

  • The Sun SPARC Enterprise T5440 server with internal SSD and the ZFS file system showed significant I/O performance improvement over traditional disk for Business Intelligence Web Catalog activity.

Performance Landscape

System | Chips | Cores | Threads | GHz | Type | Users
1 x Sun SPARC Enterprise T5440 | 4 | 32 | 256 | 1.6 | UltraSPARC T2 Plus | 28,000
5 x Sun Fire T2000 | 1 | 8 | 32 | 1.2 | UltraSPARC T1 | 10,000

Results and Configuration Summary

Hardware Configuration:

    Sun SPARC Enterprise T5440
      4 x 1.6 GHz UltraSPARC T2 Plus processors
      256 GB
      STK2540 (6 x 146GB)

Software Configuration:

    Solaris 10 5/09
    Oracle BIEE 10.1.3.4 64-bit
    Oracle 11g R1 Database

Benchmark Description

The objective of this benchmark is to highlight how Oracle BI EE can support pervasive deployments in large enterprises, using minimal hardware, by simulating an organization that needs to support more than 25,000 active concurrent users, each operating in mixed mode: ad-hoc reporting, application development, and report viewing.

The user population was divided into a mix of administrative users and business users. A maximum of 28,000 concurrent users were actively interacting and working in the system during the steady-state period. The tests executed 580 transactions per second, with a think time of 60 seconds per user between requests. In the test scenario, 95% of the workload consisted of business users viewing reports and navigating within dashboards. The remaining 5% of the concurrent users, categorized as administrative users, were doing application development.

The benchmark scenario used a typical business user sequence of dashboard navigation, report viewing, and drill-down. For example, a Service Manager logs into the system and navigates to his own set of dashboards, "Service Manager". The user then selects the "Service Effectiveness" dashboard, which shows him four distinct reports: "Service Request Trend", "First Time Fix Rate", "Activity Problem Areas", and "Cost Per Completed Service Call 2002 till 2005". The user then proceeds to view the "Customer Satisfaction" dashboard, which also contains a set of four related reports. He then drills down on some of the reports to see the detail data. The user then proceeds to more dashboards, for example "Customer Satisfaction" and "Service Request Overview". After navigating through these dashboards, he logs out of the application.

This benchmark did not use a synthetic database schema. The benchmark tests were run on a full production version of the Oracle Business Intelligence Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a true customer scenario.

Key Points and Best Practices

Since the server has 32 cores, we created four Solaris Containers, with 8 cores dedicated to each container. A total of four instances of the BI server plus Presentation server (collectively referred to as an 'instance' from here on) were installed, one instance per container. All four BI instances were clustered using the BI Cluster software components.

The ZFS file system was used to overcome the 'Too many links' error seen with ~28,000 concurrent users. In earlier runs the file system hit the UFS limit of 32,767 sub-directories per directory (LINK_MAX) with ~28K users online, producing thousands of errors because no new directories could be created beyond that point. The Web Catalog stores each user profile on disk by creating at least one dedicated directory per user, so with more than 25,000 concurrent users, ZFS is clearly the way to go.
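
The UFS limit is easy to demonstrate. The illustrative C sketch below (my own, not part of the benchmark kit) simply calls mkdir() in a loop until it fails; on UFS the failure arrives at or near the LINK_MAX boundary of 32,767 sub-directories per directory, while on a ZFS dataset the loop runs far beyond that.

    /* linkmax_probe.c - illustrative probe of the per-directory sub-directory
     * limit (LINK_MAX). Run it with the current directory on UFS and then on
     * a ZFS dataset to see the difference. Creates directories named d0, d1, ...
     */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int main(void)
    {
        char name[64];

        for (long i = 0; i < 100000; i++) {
            snprintf(name, sizeof name, "d%ld", i);
            if (mkdir(name, 0755) != 0) {
                printf("mkdir failed after %ld sub-directories: %s\n",
                       i, strerror(errno));     /* EMLINK on UFS near 32,767 */
                return 1;
            }
        }
        printf("created 100000 sub-directories without hitting a limit\n");
        return 0;
    }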

See Also:

Oracle Business Intelligence Website, BUSINESS INTELLIGENCE has other results

Disclosure Statement

Oracle Business Intelligence Enterprise Edition benchmark, see http://www.oracle.com/solutions/business_intelligence/resource-library-whitepapers.html for more. Results as of 7/20/09.

Tuesday Jul 21, 2009

Sun SPARC Enterprise T5440 Server World Record Four Processor performance result on Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark

  • World Record performance result with four processors on the two-tier SAP ERP 6.0 enhancement pack 4 (unicode) standard sales and distribution (SD) benchmark as of July 21, 2009.
  • The Sun SPARC Enterprise T5440 Server with four 1.6GHz UltraSPARC T2 Plus processors (32 cores, 256 threads) achieved 4,720 SAP SD Benchmark users running the SAP ERP application release 6.0 enhancement pack 4 benchmark with unicode software, using the Oracle10g database and the Solaris 10 OS.
  • Sun SPARC Enterprise T5440 Server with four 1.6GHz UltraSPARC T2 Plus processors beats IBM System 550 by 26% using Oracle10g and Solaris 10 even though they both use the same number of processors.
  • Sun SPARC Enterprise T5440 Server with four 1.6GHz UltraSPARC T2 Plus processors beats HP ProLiant DL585 G6 using Oracle10g and Solaris 10 with the same number of processors.
  • This benchmark result highlights the optimal performance of SAP ERP on Sun SPARC Enterprise servers running the Solaris OS and the seamless multilingual support available for systems running SAP applications.
  • In January 2009, a new version, the Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark, was released. This new release has higher CPU requirements and so yields 25-50% fewer users compared to the previous Two-tier SAP ERP 6.0 (non-unicode) Standard Sales and Distribution (SD) Benchmark; 10-30% of this is due to the extra overhead of processing the larger character strings required by Unicode encoding. Refer to the SAP Note for more details (https://service.sap.com/sap/support/notes/1139642 Note: User and password for SAP Service Marketplace required).
  • Unicode is a computing standard that allows for the representation and manipulation of text expressed in most of the world's writing systems. Before the Unicode requirement, this benchmark used ASCII characters, meaning each character was just 1 byte. The new version of the benchmark requires Unicode characters, and the Application layer (where ~90% of the cycles in this benchmark are spent) uses a new encoding, UTF-16, which uses 2 bytes to encode most characters (including all ASCII characters) and 4 bytes for some others. This requires computers to do more computation and use more bandwidth and storage for most character strings. Refer to the SAP Note for more details (https://service.sap.com/sap/support/notes/1139642 Note: User and password for SAP Service Marketplace required).

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order).

SAP ERP 6.0 Enhancement Pack 4 (Unicode) Results
(New version of the benchmark as of January 2009)

System (Processors, Memory) | OS | Database | Users | SAP ERP/ECC Release | SAPS | SAPS/Proc | Date
Sun SPARC Enterprise T5440 Server (4x UltraSPARC T2 Plus @ 1.6 GHz, 256 GB) | Solaris 10 | Oracle10g | 4,720 | 2009 6.0 EP4 (Unicode) | 25,830 | 6,458 | 21-Jul-09
HP ProLiant DL585 G6 (4x AMD Opteron 8439 SE @ 2.8 GHz, 64 GB) | Windows Server 2008 Enterprise Edition | SQL Server 2008 | 4,665 | 2009 6.0 EP4 (Unicode) | 25,530 | 6,383 | 10-Jul-09
HP ProLiant BL685c G6 (4x AMD Opteron 8435 @ 2.6 GHz, 64 GB) | Windows Server 2008 Enterprise Edition | SQL Server 2008 | 4,422 | 2009 6.0 EP4 (Unicode) | 24,230 | 6,058 | 29-May-09
IBM System 550 (4x POWER6 @ 5 GHz, 64 GB) | AIX 6.1 | DB2 9.5 | 3,752 | 2009 6.0 EP4 (Unicode) | 20,520 | 5,130 | 16-Jun-09
HP ProLiant DL585 G5 (4x AMD Opteron 8393 SE @ 3.1 GHz, 64 GB) | Windows Server 2008 Enterprise Edition | SQL Server 2008 | 3,430 | 2009 6.0 EP4 (Unicode) | 18,730 | 4,683 | 24-Apr-09
HP ProLiant BL685 G6 (4x AMD Opteron 8389 @ 2.9 GHz, 64 GB) | Windows Server 2008 Enterprise Edition | SQL Server 2008 | 3,118 | 2009 6.0 EP4 (Unicode) | 17,050 | 4,263 | 24-Apr-09
NEC Express5800 (4x Intel Xeon X7460 @ 2.66 GHz, 64 GB) | Windows Server 2008 Enterprise Edition | SQL Server 2008 | 2,957 | 2009 6.0 EP4 (Unicode) | 16,170 | 4,043 | 28-May-09
Dell PowerEdge M905 (4x AMD Opteron 8384 @ 2.7 GHz, 96 GB) | Windows Server 2003 Enterprise Edition | SQL Server 2005 | 2,129 | 2009 6.0 EP4 (Unicode) | 11,770 | 2,943 | 18-May-09

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Results and Configuration Summary

Hardware Configuration:

    One, Sun SPARC Enterprise T5440 Server
      4 x 1.6 GHz UltraSPARC T2 Plus processors (4 processors / 32 cores / 256 threads)
      256 GB memory
      3 x STK2540 each with 12 x 73GB/15KRPM disks

Software Configuration:

    Solaris 10
    SAP ECC Release: 6.0 Enhancement Pack 4 (Unicode)
    Oracle10g

SAE (Strategic Applications Engineering) and ISV-E (ISV Engineering) have submitted the following result for the SAP-SD 2-Tier benchmark. It was approved and published by SAP.

Certified Results

    Performance:
    4720 benchmark users
    SAP Certification:
    2009026

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Sun SPARC Enterprise T5440 Server Benchmark Details

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard SAP ERP 6.0 2005/EP4 (Unicode) application benchmarks as of 07/21/09: Sun SPARC Enterprise T5440 Server (4 processors, 32 cores, 256 threads) 4,720 SAP SD Users, 4x 1.6 GHz UltraSPARC T2 Plus, 256 GB memory, Oracle10g, Solaris 10, Cert# 2009026. HP ProLiant DL585 G6 (4 processors, 24 cores, 24 threads) 4,665 SAP SD Users, 4x 2.8 GHz AMD Opteron Processor 8439 SE, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009025. HP ProLiant BL685c G6 (4 processors, 24 cores, 24 threads) 4,422 SAP SD Users, 4x 2.6 GHz AMD Opteron Processor 8435, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009021. IBM System 550 (4 processors, 8 cores, 16 threads) 3,752 SAP SD Users, 4x 5 GHz POWER6, 64 GB memory, DB2 9.5, AIX 6.1, Cert# 2009023. HP ProLiant DL585 G5 (4 processors, 16 cores, 16 threads) 3,430 SAP SD Users, 4x 3.1 GHz AMD Opteron Processor 8393 SE, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009008. HP ProLiant BL685 G6 (4 processors, 16 cores, 16 threads) 3,118 SAP SD Users, 4x 2.9 GHz AMD Opteron Processor 8389, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009007. NEC Express5800 (4 processors, 24 cores, 24 threads) 2,957 SAP SD Users, 4x 2.66 GHz Intel Xeon Processor X7460, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009018. Dell PowerEdge M905 (4 processors, 16 cores, 16 threads) 2,129 SAP SD Users, 4x 2.7 GHz AMD Opteron Processor 8384, 96 GB memory, SQL Server 2005, Windows Server 2003 Enterprise Edition, Cert# 2009017. Sun Fire X4600M2 (8 processors, 32 cores, 32 threads) 7,825 SAP SD Users, 8x 2.7 GHz AMD Opteron 8384, 128 GB memory, MaxDB 7.6, Solaris 10, Cert# 2008070. IBM System x3650 M2 (2 processors, 8 cores, 16 threads) 5,100 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, DB2 9.5, Windows Server 2003 Enterprise Edition, Cert# 2008079. HP ProLiant DL380 G6 (2 processors, 8 cores, 16 threads) 4,995 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, 48 GB memory, SQL Server 2005, Windows Server 2003 Enterprise Edition, Cert# 2008071. SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

Tuesday Jul 21, 2009

UltraSPARC T2 and T2 Plus Systems

Improved Performance Over 1.4 GHz

Reported 07/21/09

Significance of Results

Results are presented for the SPEC CPU2006 rate benchmarks run on the new 1.6 GHz Sun UltraSPARC T2 and Sun UltraSPARC T2 Plus processors based systems. The new processors were tested in the Sun CMT family of systems, including the Sun SPARC Enterprise T5120, T5220, T5240, T5440 servers and the Sun Blade T6320 server module.

SPECint_rate2006

  • The Sun SPARC Enterprise T5440 server, equipped with four 1.6 GHz UltraSPARC T2 Plus processor chips, delivered 57% and 37% better results than the best 4-chip IBM POWER6+ based systems on the SPEC CPU2006 integer throughput metrics.

  • The Sun SPARC Enterprise T5240 server equipped with two 1.6 GHz UltraSPARC T2 Plus processor chips, produced 68% and 48% better results than the best 2-chip IBM POWER6+ based systems on the SPEC CPU2006 integer throughput metrics.

  • The single-chip 1.6 GHz UltraSPARC T2 processor-based Sun CMT servers produced 59% to 68% better results than the best single-chip IBM POWER6 based systems on the SPEC CPU2006 integer throughput metrics.

  • On the four-chip Sun SPARC Enterprise T5440 server, when compared versus the 1.4 GHz version of this server, the new 1.6 GHz UltraSPARC T2 Plus processor delivered performance improvements of 25% and 20% as measured by the SPEC CPU2006 integer throughput metrics.

  • The new 1.6 GHz UltraSPARC T2 Plus processor, when put into the 2-chip Sun SPARC Enterprise T5240 server, delivered improvements of 20% and 17% when compared to the 1.4 GHz UltraSPARC T2 Plus processor based server, as measured by the SPEC CPU2006 integer throughput metrics.

  • On the single-chip Sun Blade T6320 server module, Sun SPARC Enterprise T5120 and T5220 servers, the new 1.6 GHz UltraSPARC T2 processor delivered performance improvements of 13% to 17% over the 1.4 GHz version of these servers, as measured by the SPEC CPU2006 integer throughput metrics.

  • The Sun SPARC Enterprise T5440 server, equipped with four 1.6 GHz UltraSPARC T2 Plus processor chips, delivered a SPECint_rate_base2006 score 3X the best 4-chip Itanium based system.

  • The Sun SPARC Enterprise T5440 server, equipped with four 1.6 GHz UltraSPARC T2 Plus processors, delivered a SPECint_rate_base2006 score of 338, a World Record score for 4-chip systems running a single operating system instance (i.e. SMP, not clustered).

SPECfp_rate2006

  • The Sun SPARC Enterprise T5440 server, equipped with four 1.6 GHz UltraSPARC T2 Plus processor chips, delivered 35% and 22% better results than the best 4-chip IBM POWER6+ based systems on the SPEC CPU2006 floating-point throughput metrics.

  • The Sun SPARC Enterprise T5240 server, equipped with two 1.6 GHz UltraSPARC T2 Plus processor chips, produced 40% and 27% better results than the best 2-chip IBM POWER6+ based systems on the SPEC CPU2006 floating-point throughput metrics.

  • The single-chip 1.6 GHz UltraSPARC T2 processor-based Sun CMT servers produced between 18% and 24% better results than the best single-chip IBM POWER6 based systems on the SPEC CPU2006 floating-point throughput metrics.

  • On the four chip Sun SPARC Enterprise T5440 server, the new 1.6 GHz UltraSPARC T2 Plus processor delivered performance improvements of 20% and 17% when compared to 1.4 GHz processors in the same system, as measured by the SPEC CPU2006 floating-point throughput metrics.

  • The new 1.6 GHz UltraSPARC T2 Plus processor, when put into a Sun SPARC Enterprise T5240 server, delivered an improvement of 12% when compared to the 1.4 GHz UltraSPARC T2 Plus processor based server as measured by the SPEC CPU2006 floating-point throughput metrics.

  • On the single-processor Sun Blade T6320 server module and the Sun SPARC Enterprise T5120 and T5220 servers, the new 1.6 GHz UltraSPARC T2 processor delivered a performance improvement of between 10% and 11% over the 1.4 GHz version of these servers, as measured by the SPEC CPU2006 floating-point throughput metrics.

  • The Sun SPARC Enterprise T5440 server, equipped with four 1.6 GHz UltraSPARC T2 Plus processor chips, delivered a peak score 3X the best 4-chip Itanium based system, and base 2.9X, on the SPEC CPU2006 floating-point throughput metrics.

Performance Landscape

SPEC CPU2006 Performance Charts - bigger is better, selected results, please see www.spec.org for complete results. All results as of 7/17/09.

In the tables below
"Base" = SPECint_rate_base2006 or SPECfp_rate_base2006
"Peak" = SPECint_rate2006 or SPECfp_rate2006

SPECint_rate2006 results - 1 chip systems

System | Cores/Chips | Type | MHz | Base Copies | Base | Peak | Comments
Supermicro X8DAI 4/1 Xeon W3570 3200 8 127 136 Best Nehalem result
HP ProLiant BL465c G6 6/1 Opteron 2435 2600 6 82.1 104 Best Istanbul result
Sun SPARC T5220 8/1 UltraSPARC T2 1582 63 89.1 97.0 New
Sun SPARC T5120 8/1 UltraSPARC T2 1582 63 89.1 97.0 New
Sun Blade T6320 8/1 UltraSPARC T2 1582 63 89.2 96.7 New
Sun Blade T6320 8/1 UltraSPARC T2 1417 63 76.4 85.5
Sun SPARC T5120 8/1 UltraSPARC T2 1417 63 76.2 83.9
IBM System p 570 2/1 POWER6 4700 4 53.2 60.9 Best POWER6 result

SPECint_rate2006 - 2 chip systems

System | Cores/Chips | Type | MHz | Base Copies | Base | Peak | Comments
Fujitsu CELSIUS R670 8/2 Xeon W5580 3200 16 249 267 Best Nehalem result
Sun Blade X6270 8/2 Xeon X5570 2933 16 223 260
A+ Server 1021M-UR+B 12/2 Opteron 2439 SE 2800 12 168 215 Best Istanbul result
Sun SPARC T5240 16/2 UltraSPARC T2 Plus 1582 127 171 183 New
Sun SPARC T5240 16/2 UltraSPARC T2 Plus 1415 127 142 157
IBM Power 520 4/2 POWER6+ 4700 8 101 124 Best POWER6+ peak
IBM Power 520 4/2 POWER6+ 4700 8 102 122 Best POWER6+ base
HP Integrity rx2660 4/2 Itanium 9140M 1666 4 58.1 62.8 Best Itanium peak
HP Integrity BL860c 4/2 Itanium 9140M 1666 4 61.0 na Best Itanium base

SPECint_rate2006 - 4 chip systems

System | Cores/Chips | Type | MHz | Base Copies | Base | Peak | Comments
SGI Altix ICE 8200EX 16/4 Xeon X5570 2933 32 466 499 Best Nehalem result
Note: clustered, not SMP
Tyan Thunder n4250QE 24/4 Opteron 8439 SE 2800 24 326 417 Best Istanbul result
Sun SPARC T5440 32/4 UltraSPARC T2 Plus 1596 255 338 360 New. World record for 4-chip SMP SPECint_rate_base2006
Sun SPARC T5440 32/4 UltraSPARC T2 Plus 1414 255 270 301
IBM Power 550 8/4 POWER6+ 5000 16 215 263 Best POWER6 result
HP Integrity BL870c 8/4 Itanium 9150N 1600 8 114 na Best Itanium result

SPECfp_rate2006 - 1 chip systems

System | Cores/Chips | Type | MHz | Base Copies | Base | Peak | Comments
Supermicro X8DAI 4/1 Xeon W3570 3200 8 102 106 Best Nehalem result
HP ProLiant BL465c G6 6/1 Opteron 2435 2600 6 65.2 72.2 Best Istanbul result
Sun SPARC T5220 8/1 UltraSPARC T2 1582 63 64.1 68.5 New
Sun SPARC T5120 8/1 UltraSPARC T2 1582 63 64.1 68.5 New
Sun Blade T6320 8/1 UltraSPARC T2 1582 63 64.1 68.5 New
Sun Blade T6320 8/1 UltraSPARC T2 1417 63 58.1 62.3
SPARC T5120 8/1 UltraSPARC T2 1417 63 57.9 62.3
SPARC T5220 8/1 UltraSPARC T2 1417 63 57.9 62.3
IBM System p 570 2/1 POWER6 4700 4 51.5 58.0 Best POWER6 result

SPECfp_rate2006 - 2 chip systems

System | Cores/Chips | Type | MHz | Base Copies | Base | Peak | Comments
ASUS TS700-E6 8/2 Xeon W5580 3200 16 201 207 Best Nehalem result
A+ Server 1021M-UR+B 12/2 Opteron 2439 SE 2800 12 133 147 Best Istanbul result
Sun SPARC T5240 16/2 UltraSPARC T2 Plus 1582 127 124 133 New
Sun SPARC T5240 16/2 UltraSPARC T2 Plus 1415 127 111 119
IBM Power 520 4/2 POWER6+ 4700 8 88.7 105 Best POWER6+ result
HP Integrity rx2660 4/2 Itanium 9140M 1666 4 54.5 55.8 Best Itanium result

SPECfp_rate2006 - 4 chip systems

System | Cores/Chips | Type | MHz | Base Copies | Base | Peak | Comments
SGI Altix ICE 8200EX 16/4 Xeon X5570 2933 32 361 372 Best Nehalem result
Tyan Thunder n4250QE 24/4 Opteron 8439 SE 2800 24 259 285 Best Istanbul result
Sun SPARC T5440 32/4 UltraSPARC T2 Plus 1596 255 254 270 New
Sun SPARC T5440 32/4 UltraSPARC T2 Plus 1414 255 212 230
IBM Power 550 8/4 POWER6+ 5000 16 188 222 Best POWER6+ result
HP Integrity rx7640 8/4 Itanium 2 9040 1600 8 87.4 90.8 Best Itanium result

Results and Configuration Summary

Test Configurations:


Sun Blade T6320
1.6 GHz UltraSPARC T2
64 GB (16 x 4GB)
Solaris 10 10/08
Sun Studio 12, Sun Studio 12 Update 1, gccfss V4.2.1

Sun SPARC Enterprise T5120/T5220
1.6 GHz UltraSPARC T2
64 GB (16 x 4GB)
Solaris 10 10/08
Sun Studio 12, Sun Studio 12 Update 1, gccfss V4.2.1

Sun SPARC Enterprise T5240
2 x 1.6 GHz UltraSPARC T2 Plus
128 GB (32 x 4GB)
Solaris 10 5/09
Sun Studio 12, Sun Studio 12 Update 1, gccfss V4.2.1

Sun SPARC Enterprise T5440
4 x 1.6 GHz UltraSPARC T2 Plus
256 GB (64 x 4GB)
Solaris 10 5/09
Sun Studio 12 Update 1, gccfss V4.2.1

Results Summary:



T6320 T5120 T5220 T5240 T5440
SPECint_rate_base2006 89.2 89.1 89.1 171 338
SPECint_rate2006 96.7 97.0 97.0 183 360
SPECfp_rate_base2006 64.1 64.1 64.1 124 254
SPECfp_rate2006 68.5 68.5 68.5 133 270

Benchmark Description

SPEC CPU2006 is SPEC's most popular benchmark, with over 7000 results published in the three years since it was introduced. It measures:

  • "Speed" - single copy performance of chip, memory, compiler
  • "Rate" - multiple copy (throughput)

The rate metrics are used for the throughput-oriented systems described on this page. These metrics include:

  • SPECint_rate2006: throughput for 12 integer benchmarks derived from real applications such as perl, gcc, XML processing, and pathfinding
  • SPECfp_rate2006: throughput for 17 floating point benchmarks derived from real applications, including chemistry, physics, genetics, and weather.

There are "base" variants of both the above metrics that require more conservative compilation, such as using the same flags for all benchmarks.

See here for additional information.

Key Points and Best Practices

Results on this page for the Sun SPARC Enterprise T5120 server were measured on a Sun SPARC Enterprise T5220. The Sun SPARC Enterprise T5120 and Sun SPARC Enterprise T5220 are electronically equivalent. A Sun SPARC Enterprise T5120 can hold up to 4 disks, and a T5220 can hold up to 8. This system was tested with 4 disks; therefore, results on this page apply to both the T5120 and the T5220.

Know when you need throughput vs. speed. The Sun CMT systems described on this page provide massive throughput, as demonstrated by the fact that up to 255 jobs are run on the 4-chip system, 127 on 2-chip, and 63 on 1-chip. Some of the competitive chips do have a speed advantage - e.g. Nehalem and Istanbul - but none of the competitive results undertake to run the large number of jobs tested on Sun's CMT systems.

Use the latest compiler. The Sun Studio group is always working to improve the compiler. Sun Studio 12, and Sun Studio 12 Update 1, which are used in these submissions, provide updated code generation for a wide variety of SPARC and x86 implementations.

I/O still counts. Even in a CPU-intensive workload, some I/O remains. This point is explored in some detail at http://blogs.sun.com/jhenning/entry/losing_my_fear_of_zfs.

Disclosure Statement

SPEC, SPECint, SPECfp reg tm of Standard Performance Evaluation Corporation. Competitive results from www.spec.org as of 16 July 2009. Sun's new results quoted on this page have been submitted to SPEC.
Sun Blade T6320 89.2 SPECint_rate_base2006, 96.7 SPECint_rate2006, 64.1 SPECfp_rate_base2006, 68.5 SPECfp_rate2006;
Sun SPARC Enterprise T5220/T5120 89.1 SPECint_rate_base2006, 97.0 SPECint_rate2006, 64.1 SPECfp_rate_base2006, 68.5 SPECfp_rate2006;
Sun SPARC Enterprise T5240 172 SPECint_rate_base2006, 183 SPECint_rate2006, 124 SPECfp_rate_base2006, 133 SPECfp_rate2006;
Sun SPARC Enterprise T5440 338 SPECint_rate_base2006, 360 SPECint_rate2006, 254 SPECfp_rate_base2006, 270 SPECfp_rate2006;
Sun Blade T6320 76.4 SPECint_rate_base2006, 85.5 SPECint_rate2006, 58.1 SPECfp_rate_base2006, 62.3 SPECfp_rate2006;
Sun SPARC Enterprise T5220/T5120 76.2 SPECint_rate_base2006, 83.9 SPECint_rate2006, 57.9 SPECfp_rate_base2006, 62.3 SPECfp_rate2006;
Sun SPARC Enterprise T5240 142 SPECint_rate_base2006, 157 SPECint_rate2006, 111 SPECfp_rate_base2006, 119 SPECfp_rate2006;
Sun SPARC Enterprise T5440 270 SPECint_rate_base2006, 301 SPECint_rate2006, 212 SPECfp_rate_base2006, 230 SPECfp_rate2006;
IBM p 570 53.2 SPECint_rate_base2006, 60.9 SPECint_rate2006, 51.5 SPECfp_rate_base2006, 58.0 SPECfp_rate2006;
IBM Power 520 102 SPECint_rate_base2006, 124 SPECint_rate2006, 88.7 SPECfp_rate_base2006, 105 SPECfp_rate2006;
IBM Power 550 215 SPECint_rate_base2006, 263 SPECint_rate2006, 188 SPECfp_rate_base2006, 222 SPECfp_rate2006;
HP Integrity BL870c 114 SPECint_rate_base2006;
HP Integrity rx7640 87.4 SPECfp_rate_base2006, 90.8 SPECfp_rate2006.

Tuesday Jul 21, 2009

Significance of Results

The Sun Blade T6320 server module equipped with one UltraSPARC T2 processor running at 1.6 GHz delivered a World Record single-chip result while running the SPECjbb2005 benchmark.

  • The Sun Blade T6320 server module powered by one 1.6 GHz UltraSPARC T2 processor delivered a result of 229576 SPECjbb2005 bops, 28697 SPECjbb2005 bops/JVM when running the SPECjbb2005 benchmark.
  • The Sun Blade T6320 server module (with one 1.6 GHz UltraSPARC T2 processor) demonstrated 2.6X better performance than the IBM System p 570 with one 4.7 GHz POWER6 processor.
  • The Sun Blade T6320 server module (with one 1.6 GHz UltraSPARC T2 processor) demonstrated 3% better performance than the Fujitsu TX100 result which used one 3.16 GHz Intel Xeon X3380 processor.
  • The Sun Blade T6320 server module (with one 1.6 GHz UltraSPARC T2 processor) demonstrated 7% better performance than the IBM x3200 result which used one 3.16 GHz Xeon X3380 processor.
  • The Sun Blade T6320 server module running the 1.6 GHz UltraSPARC T2 processor delivered 20% better performance than a Sun SPARC Enterprise T5120 with the 1.4 GHz UltraSPARC T2 processor.
  • The Sun Blade T6320 used the OpenSolaris 2009.06 operating system and the Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release JVM to obtain this leading result.

Performance Landscape

SPECjbb2005 Performance Chart (ordered by performance)

bops: SPECjbb2005 Business Operations per Second (bigger is better)

System | Chips | Cores | Threads | GHz | Type | SPECjbb2005 bops | SPECjbb2005 bops/JVM
Sun Blade T6320 1 8 64 1.6 UltraSPARC T2 229576 28697
Fujitsu TX100 1 4 4 3.16 Intel Xeon 223691 111846
IBM x3200 M2 1 4 4 3.16 Intel Xeon 214578 107289
Fujitsu RX100 1 4 4 3.16 Intel Xeon 211144 105572
IBM x3350 1 4 4 3.0 Intel Xeon 194256 97128
Sun SE T5120 1 8 64 1.4 UltraSPARC T2 192055 24007
IBM p 570 1 2 4 4.7 POWER6 88089 88089

Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

Results and Configuration Summary

Hardware Configuration:

    Sun Blade T6320
      1 x 1.6 GHz UltraSPARC T2 processor
      64 GB

Software Configuration:

    OpenSolaris 2009.06
    Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release

Benchmark Description

SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance.
  • Each JVM was executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • Each JVM was bound to a separate processor set containing 1 core, to reduce memory access latency by using the physical memory closest to the processor.

See Also

Disclosure Statement

SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 7/17/2009 on http://www.spec.org. SPECjbb2005, Sun Blade T6320 229576 SPECjbb2005 bops, 28697 SPECjbb2005 bops/JVM; IBM p 570 88089 SPECjbb2005 bops, 88089 SPECjbb2005 bops/JVM; Fujitsu TX100 223691 SPECjbb2005 bops, 111846 SPECjbb2005 bops/JVM; IBM x3350 194256 SPECjbb2005 bops, 97128 SPECjbb2005 bops/JVM; Sun SPARC Enterprise T5120 192055 SPECjbb2005 bops, 24007 SPECjbb2005 bops/JVM.

Friday Jul 10, 2009

Significance of Results

Sun and Microsoft combined to deliver World Record price performance for Windows based results on the TPC-H benchmark at the 300GB scale factor. Using Microsoft's SQL Server 2008 Enterprise database along with Microsoft Windows Server 2008 operating system on the Sun Fire X4600 M2 server, the result of 2.80 $/QphH@300GB (USD) was delivered.

  • The Sun Fire X4600 M2 provides World Record price-performance of 2.80 $/QphH@300GB (USD) among Windows based TPC-H results at the 300GB scale factor. This result is 14% better price performance than the HP DL785 result.
  • The Sun Fire X4600 M2 trails HP's World Record single system performance (HP: 57,684 QphH@300GB, Sun: 55,158 QphH@300GB) by less than 5%.
  • The Sun/SQL Server solution used fewer disks for the database (168) than the other top performance leaders @300GB.
  • IBM required 79% more disks (300 total) than Sun to get a result of 46,034 QphH@300GB; Sun's QphH is 20% higher.
  • HP required 21% more disks (204 total) than Sun to achieve a result of 3.24 $/QphH@300GB (USD) which is 16% worse than Sun's price performance.

This is Sun's first published TPC-H SQL Server benchmark.

Performance Landscape

ch/co/th = chips, cores, threads
$/QphH = TPC-H Price/Performance metric (smaller is better)

System ch/co/th Processor Database QphH $/QphH Price Disks Available
Sun Fire X4600 M2 8/32/32 2.7 Opteron 8384 SQL Server 2008 55,158 2.80 $154,284 168 07/06/09
HP DL785 8/32/32 2.7 Opteron 8384 SQL Server 2008 57,684 3.24 $186,700 204 11/17/08
IBM x3950 M2 8/32/32 2.93 Intel X7350 SQL Server 2005 46,034 5.40 $248,635 300 03/07/08

Complete benchmark results may be found at the TPC benchmark website http://www.tpc.org.

Results and Configuration Summary

Server:

    Sun Fire X4600 M2 with:
      8 x AMD Opteron 8384, 2.7 GHz QC processors
      256 GB memory
      3 x 73GB (15K RPM) internal SAS disks

Storage:

    14 x Sun Storage J4200 each consisting of 12 x 146GB 15,000 RPM SAS disks

Software:

    Operating System: Microsoft Windows Server 2008 Enterprise x64 Edition SP1
    Database Manager: SQL Server 2008 Enterprise x64 Edition SP1

Audited Results:

    Database Size: 300GB (Scale Factor)
    TPC-H Composite: 55,157.5 QphH@300GB
    Price/performance: $2.80 / QphH@300GB (USD)
    Available: July 6, 2009
    Total 3 Year Cost: $154,284.19 (USD)
    TPC-H Power: 67,095.6
    TPC-H Throughput: 45,343.5
    Database Load Time: 17 hours 29 minutes
    Storage Ratio: 76.82

Benchmark Description

The TPC-H benchmark is a performance benchmark established by the Transaction Processing Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H Database sizes (100GB, 300GB, 1000GB, 3000GB and 10000GB) are not allowed by the TPC.

TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system.

The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multi user modes. The benchmark requires reporting of price/performance, which is the ratio of QphH to total HW/SW cost plus 3 years maintenance. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.
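
As a quick arithmetic check (my own sketch, using the audited values above and the usual definition of the composite as the geometric mean of the Power and Throughput metrics), the reported QphH and price/performance can be reproduced as follows:

    /* qphh_composite.c - illustrative check: the TPC-H composite (QphH@Size) is
     * the geometric mean of the Power and Throughput metrics at that size.
     * Values below are the audited numbers quoted above for the Sun Fire X4600 M2.
     */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double power      = 67095.6;   /* TPC-H Power@300GB      */
        double throughput = 45343.5;   /* TPC-H Throughput@300GB */
        double price      = 154284.19; /* 3-year total cost, USD */

        double qphh = sqrt(power * throughput);
        printf("QphH@300GB   = %.1f\n", qphh);          /* ~55,157.5 */
        printf("$/QphH@300GB = %.2f\n", price / qphh);  /* ~2.80     */
        return 0;
    }

Compiled with any C compiler (link with -lm), this prints values matching the audited 55,157.5 QphH@300GB and $2.80/QphH@300GB figures above.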

Key Points and Best Practices

SQL Server 2008 is able to take advantage of the lower latency that local memory access provides on the Sun Fire X4600 M2 server. This was achieved by setting the NUMA initialization parameter to enable all NUMA optimizations.

Enabling the Windows large-page feature provided a significant performance improvement. Because SQL Server 2008 manages its own memory buffer, the use of large pages resulted in a significant performance increase. Note that to use large pages, an application must be part of the large-page group of the OS (Windows).

The 64-bit Windows OS and 64-bit SQL Server software were able to utilize the 256GB of memory available on the Sun Fire X4600 M2 server.

See Also

Disclosure Statement

TPC-H@300GB: Sun Fire X4600 M2 55,158 QphH@300GB, $2.80/QphH@300GB, availability 7/6/09; HP DL785, 57,684 QphH@300GB, $3.24/QphH@300GB, availability 11/17/08; IBM x3950 M2, 46,034 QphH@300GB, $5.40/QphH@300GB, availability 03/07/08; TPC-H, QphH, $/QphH tm of Transaction Processing Performance Council (TPC). More info www.tpc.org.

This blog copyright 2009 by John Henning


Monday Jul 06, 2009

Significance of Results

The Sun Blade X6275 cluster, equipped with 2.93 GHz Intel QC X5570 processors and QDR InfiniBand interconnect, delivered the best performance at 32, 64 and 128 cores for the RADIOSS Neon_1M and Taurus_Frontal benchmarks.

  • Using half the nodes (16), the Sun Blade X6275 cluster was 3% faster than the 32-node SGI cluster running the Neon_1M test case.
  • In the 128-core configuration, the Sun Blade X6275 cluster was 49% faster than the SGI cluster running the Neon_1M test case.
  • In the 128-core configuration, the Sun Blade X6275 cluster was 16% faster than the top SGI cluster running the Taurus_Frontal test case.
  • At both the 32- and 64-core levels the Sun Blade X6275 cluster was 60% faster running the Neon_1M test case.
  • At both the 32- and 64-core levels the Sun Blade X6275 cluster was 4% faster running the Taurus_Frontal test case.

Performance Landscape


RADIOSS Public Benchmark Test Suite
Results are Total Elapsed Run Times (sec.)

System | Cores | TAURUS_FRONTAL (1.8M) | NEON_1M (1.06M) | NEON_300K (277K)

SGI Altix ICE 8200 IP95 2.93GHz, 32 nodes, DDR 256 3559 1672 310

Sun Blade X6275 2.93GHz, 16 nodes, QDR 128 4397 1627 361
SGI Altix ICE 8200 IP95 2.93GHz, 16 nodes, DDR 128 5033 2422 360

Sun Blade X6275 2.93GHz, 8 nodes, QDR 64 5934 2526 587
SGI Altix ICE 8200 IP95 2.93GHz, 8 nodes, DDR 64 6181 4088 584

Sun Blade X6275 2.93GHz, 4 nodes, QDR 32 9764 4720 1035
SGI Altix ICE 8200 IP95 2.93GHz, 4 nodes, DDR 32 10120 7574 1017

Results and Configuration Summary

Hardware Configuration:
    8 x Sun Blade X6275
    2x2.93 GHz Intel QC X5570 processors, turbo enabled (per half blade)
    24 GB (6 x 4GB 1333 MHz DDR3 dimms)
    InfiniBand QDR interconnects

Software Configuration:

    OS: 64-bit SUSE Linux Enterprise Server SLES 10 SP 2
    Application: RADIOSS V9.0 SP 1
    Benchmark: RADIOSS Public Benchmark Test Suite

Benchmark Description

Altair has provided a suite of benchmarks to demonstrate the performance of RADIOSS. The initial set of benchmarks provides four automotive crash models. Future updates will add in marine and aerospace applications, as well as including automotive NVH applications. The benchmarks use real data, requiring double precision computations and the parith feature (Parallel arithmetic algorithm) to obtain exactly the same results whatever the number of processors used.

Please go here for a more complete description of the tests.

Key Points and Best Practices

The Intel QC X5570 processors include a turbo boost feature coupled with a speed-step option in the CPU section of the Advanced BIOS settings. Under specific circumstances, this can provide CPU overclocking which increases the processor frequency from 2.93GHz to 3.2GHz. This feature was enabled when generating the results reported here.

Node-to-node MPI ping-pong tests show a bandwidth of 3000 MB/sec on the Sun Blade X6275 cluster using QDR. The same tests performed on a Sun Fire X2270 cluster equipped with DDR interconnect produced a bandwidth of 1500 MB/sec. On another recent Intel based Sun Fire X2250 cluster (3.4 GHz DC E5272 processors), also equipped with DDR interconnects, the bandwidth was 1250 MB/sec. The same Sun Fire X2250 cluster equipped with SDR IB interconnect produced an MPI ping-pong bandwidth of 975 MB/sec.
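
For readers unfamiliar with the test, the sketch below shows the shape of a minimal MPI ping-pong bandwidth measurement (illustrative only; it is not the tool used to obtain the numbers above): rank 0 sends a large buffer to rank 1 and back, and bandwidth is the message size divided by the average one-way time.

    /* mpi_pingpong.c - minimal MPI ping-pong bandwidth sketch (illustrative).
     * Run with two ranks, e.g.: mpirun -np 2 ./mpi_pingpong
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (4 * 1024 * 1024)   /* 4 MB message */
    #define ITERS     100

    int main(int argc, char **argv)
    {
        int rank;
        char *buf = malloc(MSG_BYTES);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            /* each iteration moves the message there and back: 2 * MSG_BYTES */
            double mb_per_sec = (2.0 * MSG_BYTES * ITERS) / (t1 - t0) / 1e6;
            printf("ping-pong bandwidth: %.0f MB/sec\n", mb_per_sec);
        }

        MPI_Finalize();
        free(buf);
        return 0;
    }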

See Also

Current RADIOSS Benchmark Results:
http://www.altairhyperworks.com/Benchmark.aspx

Disclosure Statement

All information on the Altair HyperWorks website is Copyright 2009 Altair Engineering, Inc. All Rights Reserved. Results from http://www.altairhyperworks.com/Benchmark.aspx

Tuesday Jun 23, 2009

Significance of Results

A Sun Constellation system, composed of 48 Sun Blade X6440 server modules in a Sun Blade 6048 chassis, running OpenSolaris 2008.11 and using the Sun Studio 12 Update 1 compiler delivered World Record SPEC CPU2006 rate results.

On the SPECint_rate_base2006 benchmark, Sun delivered 4.7 times more performance than the IBM power 595 (5GHz POWER6); this IBM system requires a slightly larger cabinet than the Sun Blade 6048 chassis (details below).

On the SPECfp_rate_base2006 benchmark Sun delivered 3.9 times more performance than the largest IBM power 595 (5GHz POWER6); this IBM system requires a slightly larger cabinet than the Sun Blade 6048 chassis (details below).

  • The Sun Constellation System equipped with AMD Opteron QC 8384 2.7 GHz processors, running OpenSolaris 2008.11 and using the Sun Studio 12 update 1 compiler, delivered the World Record SPECint_rate_base2006 score of 8840.
  • This SPECint_rate_base2006 score beat the previous record holding score by over three times.
  • The Sun Constellation System equipped with AMD Opteron QC 8384 2.7 GHz processors, running OpenSolaris 2008.11 and using the Sun Studio 12 update 1 compiler, delivered the fastest x86 SPECfp_rate_base2006 score of 6500.
  • This SPECfp_rate_base2006 score beat the previous x86 record holding score by nine times.

Performance Landscape

SPEC CPU2006 Performance Charts - bigger is better, selected results, please see www.spec.org for complete results.

SPECint_rate2006

System | Type | GHz | Chips | Cores | Peak | Base | Notes (1)
Sun Blade 6048 Opteron 8384 2.7 192 768 -- 8840 New Record
SGI Altix 4700 Density System Itanium 9150M 1.66 128 256 3354 2893 Previous Best
SGI Altix 4700 Bandwidth System Itanium2 9040 1.6 128 256 2971 2715
Fujitsu/Sun SPARC Enterprise M9000 SPARC64 VII 2.52 64 256 2290 2090
IBM Power 595 POWER6 5.0 32 64 2160 1870 Best POWER6

(1) Results as of 23 June 2009 from www.spec.org.

SPECfp_rate2006

System | Type | GHz | Chips | Cores | Peak | Base | Notes (2)
SGI Altix 4700 Density System Itanium 9140M 1.66 512 1024 -- 10580
Sun Blade 6048 Opteron 8384 2.7 192 768 -- 6500 New x86 Record
SGI Altix 4700 Bandwidth System Itanium2 9040 1.6 128 256 3507 3419
IBM Power 595 POWER6 5.0 32 64 2184 1681 Best POWER6
Fujitsu/Sun SPARC Enterprise M9000 SPARC64 VII 2.52 64 256 2005 1861
SGI Altix 4700 Bandwidth System Itanium 9150M 1.66 128 256 1947 1832
SGI Altix ICE 8200EX Intel X5570 2.93 8 32 742 723

(2) Results as of 23 June 2009 from www.spec.org.


Results and Configuration Summary

Hardware Configuration:
    1 x Sun Blade 6048
      48 x Sun Blade X6440, each with
        4 x 2.7 GHz QC AMD Opteron 8384 processors
        32 GB, (8 x 4GB)

Software Configuration:

    O/S: OpenSolaris 2008.11
    Compiler: Sun Studio 12 Update 1
    Other SW: MicroQuill SmartHeap Library 9.01 x64
    Benchmark: SPEC CPU2006 V1.1

Key Points and Best Practices

The Sun Blade 6048 chassis is able to contain a variety of server modules. In this case, the Sun Blade X6440 was used to provide this capacity solution. This single rack delivered results which have not been seen in this form factor.

To run this many copies, the benchmark requires a reasonably good file server for the directory in which the benchmark is run. A Sun Fire X4540 server provided the required disk space, which was accessed by the blades over NFS.

Sun has shown 4.7x greater SPECint_rate_base2006 and 3.9x greater SPECfp_rate_base2006 in a slightly smaller cabinet. IBM specifications are at: http://www-03.ibm.com/systems/power/hardware/595/specs.html. One frame (slimline doors): 79.3"H x 30.5"W x 58.5"D weight: 3,376 lb. One frame (acoustic doors): 79.3"H x 30.5"W x 71.1"D weight: 3,422 lb. The Sun Blade 6048 specifications are at: http://www.sun.com/servers/blades/6048chassis/specs.xml One Sun Blade 6048: 81.6"H x 23.9"W x 40.3"D weight: 2,300 lb (fully configured).

Disclosure Statement:

SPEC, SPECint, SPECfp reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 6/22/2009 and this report. Sun Blade 6048 chassis with Sun Blade X6440 server modules (48 nodes with 4 chips, 16 cores, 16 threads each, OpenSolaris 2008.11, Studio 12 update 1) - 8840 SPECint_rate_base2006, 6500 SPECfp_rate_base2006; IBM p595, 1870 SPECint_rate_base2006, 1681 SPECfp_rate_base2006.

See Also

Tuesday Jun 23, 2009

Significance of Multiple World Records

The Sun Blade X6275 server module, equipped with two Intel QC Xeon X5570 2.93 GHz processors and running the OpenSolaris 2009.06 operating system delivered the best SPECfp2006 and SPECint2006 results to date.
  • The Sun Blade X6275 server module using the Sun Studio 12 update 1 compiler and the OpenSolaris 2009.06 operating system delivered a World Record SPECfp2006 result of 50.8.

  • This SPECfp2006 result beats the best result by the competition, using the same processor type, by 20%.
  • The Sun Blade X6275 server module using the Sun Studio 12 update 1 compiler and the OpenSolaris 2009.06 operating system delivered a World Record SPECint2006 result of 37.4.

  • This SPECint2006 result just tops the best result by the competition even though that result used the 9% faster clock W-series chip of the Nehalem family.

Sun(TM) Studio 12 Update 1 contains new features and enhancements to boost performance and simplify the creation of high-performance parallel applications for the latest multicore x86 and SPARC-based systems running on leading Linux platforms, the Solaris(TM) Operating System (OS) or OpenSolaris(TM). The Sun Studio 12 Update 1 software has set almost a dozen industry benchmark records to date, and in conjunction with the freely available community-based OpenSolaris 2009.06 OS, was instrumental in landing four new ground-breaking SPEC CPU2006 results.

Sun Studio 12 Update 1 includes improvements in the compiler's ability to automatically parallelise code - after all, the easiest way to develop parallel applications is to let the compiler do it for you; improved support for parallelisation specifications such as OpenMP, including the latest OpenMP 3.0 specification; and improvements in the tools' ability to give the developer meaningful feedback about parallel code, for example the Performance Analyzer's ability to profile MPI code.
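
As a toy illustration of the kind of loop these features target (my own sketch, not code from any benchmark), the saxpy loop below can be parallelised with an OpenMP pragma or, because its iterations are independent, picked up by the compiler's autoparallelizer (Sun Studio options along the lines of -xopenmp and -xautopar) with no source change at all:

    /* saxpy_omp.c - toy example of a loop that is trivially parallel.
     * With OpenMP the pragma spreads iterations across threads; with the
     * pragma removed, this is the kind of loop an autoparallelizing
     * compiler can parallelize on its own, since iterations are independent.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #ifdef _OPENMP
    #include <omp.h>
    #endif

    #define N 10000000

    int main(void)
    {
        float *x = malloc(N * sizeof *x);
        float *y = malloc(N * sizeof *y);
        float a = 2.0f;

        for (long i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];      /* saxpy: no cross-iteration dependence */

    #ifdef _OPENMP
        printf("ran with up to %d OpenMP threads\n", omp_get_max_threads());
    #endif
        printf("y[N-1] = %f\n", y[N - 1]);
        free(x); free(y);
        return 0;
    }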

Performance Landscape

SPEC CPU2006 Performance Charts - bigger is better, selected results, please see www.spec.org for complete results.

SPECint2006

System | Type | GHz | Chips | Cores | Peak | Base | Notes (1)
Sun Blade X6275 Xeon X5570 2.93 2 8 37.4 31.0 New Record
ASUS TS700-E6 (Z8PE-D12X) Xeon W5580 3.2 2 8 37.3 33.2 Previous Best
Fujitsu R670 Xeon W5580 3.2 2 8 37.2 33.2
Sun Blade X6270 Xeon X5570 2.93 2 8 36.9 32.0
Fujitsu Celsius R570 Xeon X5570 2.93 2 8 36.3 32.2
YOYOtech MLK1610 Intel Core i7-965 3.73 1 4 36.0 32.5
HP ProLiant DL585 G5 Opteron 8393 3.1 1 1 23.4 19.7 Best Opteron
IBM System p570 POWER6 4.70 1 1 21.7 17.8 Best POWER6

(1) Results as of 22 June 2009 from www.spec.org and this report.

SPECfp2006

System | Type | GHz | Chips | Cores | Peak | Base | Notes (2)
Sun Blade X6275 Xeon X5570 2.93 2 8 50.8 44.2 New Record
Sun Blade X6270 Xeon X5570 2.93 2 8 50.4 45.0 Previous Best
Sun Blade X4170 Xeon X5570 2.93 2 8 48.9 43.9
Fujitsu R670 Xeon W5580 3.2 2 8 42.2 39.5
HP ProLiant DL585 G5 Opteron 8393 3.1 2 8 25.9 23.6 Best Opteron
IBM Power 595 POWER6 5.00 1 1 24.9 20.1 Best POWER6

(2) Results as of 22 June 2009 from www.spec.org and this report.

Results and Configuration Summary

Hardware Configuration:
    Sun Blade X6275
      2 x 2.93 GHz QC Intel Xeon X5570 processors, turbo enabled
      24 GB, (6 x 4GB DDR3-1333 DIMM)
      1 x 146 GB, 10000 RPM SAS disk

Software Configuration:

    O/S: OpenSolaris 2009.06
    Compiler: Sun Studio 12 Update 1
    Other SW: MicroQuill SmartHeap Library 9.01 x64
    Benchmark: SPEC CPU2006 V1.1

Key Points and Best Practices

These results show that choosing the right compiler for the job can maximize one's investment in hardware. The Sun Studio compiler teamed with the OpenSolaris operating system allows one to tackle hard problems to get quick solution turnaround.
  • Autoparallelism was used to deliver the fastest time to solution. These results show that autoparallel compilation is a very viable option that should be considered when one needs the quickest turnaround of results. Note that not all codes benefit from this optimization, just as they can't always take advantage of other compiler optimization techniques.
  • OpenSolaris 2009.06 was able to fully take advantage of the turbo mode of the Nehalem family of processors.

Disclosure Statement

SPEC, SPECint, SPECfp reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 6/22/2009. Sun Blade X6275 (Intel X5570, 2 chips, 8 cores) 50.8 SPECfp2006, 37.4 SPECint2006; ASUS TS700-E6 (Intel W5570, 2 chips, 8 cores) 37.3 SPECint2006; Fujitsu R670 (Intel X5570, 2 chips, 8 cores) 42.2 SPECfp2006.

Tuesday Jun 23, 2009

A new and exceptional TPC-H result submitted today has been obtained on a cluster of 43 Sun Fire X4540 servers, each equipped with two AMD Opteron 2356 2.3 GHz processors, running the ParAccel Analytic Database on Sun OpenSolaris 2009.06. The Sun/ParAccel cluster achieved a result of 1,050,566.20 QphH@30000GB with a price-performance of $2.86/QphH@30000GB.

This is an incredible World Record for both performance and price-performance at the largest TPC-H Scale Factor (30TB) to date.

As of today, the only other 30TB result posted is on a single HP Superdome, powered by 64 x 1.6 GHz Itanium2 Dual Core processors running Oracle 10gR2. The HP result is 150,960 QphH @30000GB with a price-performance of $46.69/QphH @30000GB.

This result establishes the overall leadership of the Sun/ParAccel/OpenSolaris cluster solution in Decision Support Systems (DSS) and Data Warehousing.

  • The Sun Fire X4540 / ParAccel cluster was over seven times (7x) faster than the HP Superdome and had sixteen times (16x) better price-performance. In addition, the total cost of the Sun/ParAccel configuration (H/W + S/W + 3 years maintenance) is less than half of the total cost of the HP/Oracle configuration.

  • The Sun Fire X4540 cluster storage consisted entirely of fully mirrored internal drives. There were almost 1000 fewer disk spindles than the HP Superdome solution (2064 vs. 3072 disks), resulting in an enormous reduction of hardware logistics, at a fraction of the floor space (172 RU vs 1120 RU).

  • This solution is one of the TPC-H new-generation DSS DBMS (column based, shared nothing, data compression, etc.) results. It is noteworthy that all of the other new generation TPC-H submissions (at 100GB up to 3000GB) ran queries entirely from memory. This new result is disk based and thus establishes the leadership and viability of the Sun/ParAccel/OpenSolaris solution on shared nothing clusters for very large disk based databases -- much larger than memory sizes realistically available even in extremely large database installations.

  • There are a number of new generation DBMS designed for Decision Support such as ParAccel, either currently for sale or still under development, all implemented on Linux. This result is the first public proof point of a new generation data warehousing product running on Solaris, more specifically OpenSolaris.

  • The load time of the 30TB database on the Sun/ParAccel cluster was 4 times faster than the HP Superdome solution. For large DSS databases, load time is a very important factor.

Performance Landscape

ch/co/th = chips, cores, threads
$/QphH = TPC-H Price/Performance metric (smaller is better)
QphH = TPC-H Composite Metric (bigger is better)


System | ch/co/th | Database | QphH | $/QphH | Price | # Disks | Available
43 x Sun Fire X4540 | 86/344/344 | PADB | 1,050,566 | 2.86 | $3,006,861 | 2064 | 06/21/09
1 x HP Superdome | 64/128/128 | Oracle | 150,960 | 46.69 | $7,048,342 | 3072 | 06/18/07

Complete benchmark results may be found at the TPC benchmark website http://www.tpc.org.

Results and Configuration Summary

Servers:

    43 X Sun Fire X4540 each with:
      2 x AMD Opteron 2356, 2.3 GHz QC processors
      64 GB memory
      48 x 500GB (7,200 RPM) internal SATA disks
    86 total processors
    344 total processor cores
    344 total processor threads

Storage:

    No external storage

Switches:

    3 x 48 port Cisco 3750 + 4 x Cisco 3750 24 port 1Gb Ethernet Switches

Software:

    Operating System: OpenSolaris 2009.06
    Database Manager: ParAccel PADB

Audited Results:

    Database Size: 30,000 GB (Scale Factor)
    TPC-H Composite: 1,050,566.20 QphH@30000GB
    Price/performance: $2.86 / QphH@30000GB
    Available: June 21, 2009
    Total 3 Year Cost: $3,006,861
    TPC-H Power: 1,326,910.40
    TPC-H Throughput: 831,758.00
    Database Load Time: ~3 Hours 29 minutes
    Storage Ratio: 32.04

Benchmark Description

The TPC-H benchmark is a performance benchmark established by the Transaction Processing Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H Database sizes (100GB, 300GB, 1000GB, 3000GB and 10000GB) are not allowed by the TPC.

TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system.

The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multi user modes. The benchmark requires reporting of price/performance, which is the ratio of QphH to total HW/SW cost plus 3 years maintenance. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

Key Technical Points

ParAccel PADB is one of a new generation of DBMSs designed specifically for Decision Support and Data Warehousing applications. The Sun Fire X4540 and OpenSolaris 2009.06 are a perfect match for the PADB solution: the Sun Fire X4540 provides a large amount of internal storage in a compact form factor, while OpenSolaris contributes ISM shared memory management, strong network performance, and the powerful DTrace performance analysis tools.
Below are the main architectural features of the ParAccel product:

Shared Nothing Architecture

Shared nothing is the optimal hardware architecture for highly parallel database operations in DSS environments. The inherent divide-and-conquer approach of distributing data over many nodes proportionally reduces the amount of work each node must do and thus has the potential for near-linear scalability.
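
To make the divide-and-conquer idea concrete, here is a minimal Python sketch (purely illustrative, not ParAccel code) that hash-partitions rows across a handful of hypothetical nodes, aggregates locally on each node, and merges the small partial results on a coordinator.

    # Purely illustrative sketch of the shared-nothing idea (not ParAccel code):
    # rows are hash-partitioned across nodes, each node aggregates only its own
    # slice, and a coordinator merges the small per-node partial results.
    from collections import defaultdict

    NUM_NODES = 4  # hypothetical cluster size

    def partition(rows, key, num_nodes=NUM_NODES):
        """Assign each row to a node by hashing its distribution key."""
        nodes = [[] for _ in range(num_nodes)]
        for row in rows:
            nodes[hash(row[key]) % num_nodes].append(row)
        return nodes

    def local_sum(rows, group_key, value_key):
        """Per-node work: aggregate only the locally stored rows."""
        acc = defaultdict(float)
        for row in rows:
            acc[row[group_key]] += row[value_key]
        return acc

    def global_merge(partials):
        """Coordinator work: merge the small per-node partial aggregates."""
        total = defaultdict(float)
        for part in partials:
            for key, value in part.items():
                total[key] += value
        return dict(total)

    orders = [{"region": "EU", "price": 10.0},
              {"region": "US", "price": 20.0},
              {"region": "EU", "price": 5.0}]
    partials = [local_sum(node_rows, "region", "price")
                for node_rows in partition(orders, "region")]
    print(global_merge(partials))   # e.g. {'EU': 15.0, 'US': 20.0}

Because each node scans only its own slice of the data, adding nodes shrinks the per-node work roughly in proportion, which is the basis of the near-linear scalability noted above.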

Column Based Physical Storage

Relational tables can be physically stored on disk in a row oriented fashion, or in a column oriented fashion. In the row oriented option, all columns of each row are stored contiguously on disk. By contrast, the column oriented option stores all the values of each column contiguously on disk. The choice of row store vs. column store may at first glance seem arbitrary, but in fact has profound consequences on the amount of I/O bandwidth, memory bandwidth and CPU requirements necessary for processing various types of queries.
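
The following toy Python sketch (an illustration of the general principle, not of ParAccel's on-disk format) shows one of those consequences directly: a query that needs only one column reads far fewer bytes from a column-oriented layout than from a row-oriented one.

    # Toy illustration: to answer "SELECT SUM(price)", a row store must read
    # every row in full, while a column store reads only the 'price' column.
    import pickle

    rows = [  # a tiny table: (orderkey, quantity, price, comment)
        (1, 17, 21168.23, "deposits sleep quickly"),
        (2, 36, 45983.16, "quickly regular accounts"),
        (3,  8, 13309.60, "furiously final requests"),
    ]

    # Row-oriented layout: whole rows are stored contiguously.
    row_store = [pickle.dumps(row) for row in rows]

    # Column-oriented layout: each column is stored contiguously.
    column_names = ("orderkey", "quantity", "price", "comment")
    column_store = {name: pickle.dumps([row[i] for row in rows])
                    for i, name in enumerate(column_names)}

    # Bytes a single-column scan would have to read under each layout.
    bytes_row_scan = sum(len(blob) for blob in row_store)   # all columns come along
    bytes_col_scan = len(column_store["price"])             # just the one column
    print(f"row store scan:    {bytes_row_scan} bytes")
    print(f"column store scan: {bytes_col_scan} bytes")

    # The query answer is the same either way; only the I/O differs.
    assert sum(row[2] for row in rows) == sum(pickle.loads(column_store["price"]))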

Aggressive Data Compression

There are dozens of known techniques for storing data in a manner requiring fewer bytes than the original plain form of the data. The techniques are referred to as data compression algorithms. ParAccel uses several very effective data compression techniques. Compression is beneficial for query processing in that it reduces the amount of data that needs to be read from disk, and the amount of main memory space needed for processing the data. Both of these characteristics lead to query processing efficiencies and cost efficiencies.
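
As one simple, generic example of such a technique (the post does not say which algorithms ParAccel actually uses), the Python sketch below run-length encodes a sorted, low-cardinality column, collapsing thousands of stored values into a handful of (value, count) pairs.

    # Generic run-length encoding (RLE) sketch; shown only to illustrate why
    # compression cuts disk reads and memory use, not as ParAccel internals.

    def rle_encode(values):
        """Collapse runs of identical values into (value, run_length) pairs."""
        encoded = []
        for value in values:
            if encoded and encoded[-1][0] == value:
                encoded[-1] = (value, encoded[-1][1] + 1)
            else:
                encoded.append((value, 1))
        return encoded

    def rle_decode(encoded):
        """Expand (value, run_length) pairs back into the original column."""
        return [value for value, count in encoded for _ in range(count)]

    # A sorted, low-cardinality column such as an order-status flag.
    status_column = ["PENDING"] * 3000 + ["RETURNED"] * 2000 + ["SHIPPED"] * 5000
    encoded = rle_encode(status_column)

    print(encoded)   # [('PENDING', 3000), ('RETURNED', 2000), ('SHIPPED', 5000)]
    assert rle_decode(encoded) == status_column
    # 10,000 stored values shrink to three (value, count) pairs: less data to
    # read from disk and less main memory needed while processing the column.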

Low Cost Servers and Interconnects

The ParAccel software does not require expensive proprietary hardware. Shared nothing clusters of small and low cost systems can provide adequate memory for aggressively compressed database engines to achieve performance levels far above the levels achievable by conventional database engines. In addition, the software does not require expensive special networking infrastructure but instead provides excellent performance just running on standard GbE equipment.

Disclosure Statement

TPC-H@30000GB: Sun Fire X4540, 1,050,566 QphH@30000GB, $2.86/QphH@30000GB, availability 06/21/09. TPC-H@30000GB: HP Integrity Superdome, 150,960 QphH@30000GB, $46.69/QphH@30000GB, availability 06/18/07. QphH and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). More info: www.tpc.org.

Monday Jun 15, 2009

Significance of Results

  • World record performance result with 8 processors on the Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark, as of June 10, 2009.
  • The Sun Fire X4600 M2 server with 8 AMD Opteron 8384 SE processors (32 cores, 32 threads) achieved 6,050 SAP SD Benchmark users running SAP ERP application release 6.0 enhancement pack 4 benchmark with unicode software, using MaxDB 7.8 database and Solaris 10 OS.
  • This benchmark result highlights the optimal performance of SAP ERP on Sun Fire servers running the Solaris OS and the seamless multilingual support available for systems running SAP applications.
  • ZFS is used in this benchmark to hold the database and log files.
  • The Sun Fire X4600 M2 server beats both the HP ProLiant DL785 G5 and the NEC Express5800 running Windows by 10% and 35% respectively even though all three systems use the same number of processors.
  • In January 2009, a new version, the Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark, was released. This new release has higher CPU requirements and so yields 25-50% fewer users compared to the previous Two-tier SAP ERP 6.0 (non-unicode) Standard Sales and Distribution (SD) Benchmark. 10-30% of this is due to the extra overhead of processing the larger character strings required by Unicode encoding. Refer to the SAP Note for more details (https://service.sap.com/sap/support/notes/1139642 Note: User and password for SAP Service Marketplace required).

  • Unicode is a computing standard that allows for the representation and manipulation of text expressed in most of the world's writing systems. Before the Unicode requirement, this benchmark used ASCII characters, meaning each character occupied just 1 byte. The new version of the benchmark requires Unicode characters, and the Application layer (where ~90% of the cycles in this benchmark are spent) uses a new encoding, UTF-16, which uses 2 bytes to encode most characters (including all ASCII characters) and 4 bytes for some others. This requires computers to do more computation and to use more bandwidth and storage for most character strings, as the short sketch after this list illustrates. Refer to the above SAP Note for more details.
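
The following small Python sketch (illustration only, with made-up sample text, not benchmark code) shows that size difference directly.

    # Illustration (not benchmark code) of the extra space UTF-16 needs compared
    # with the 1-byte ASCII encoding used by the pre-Unicode benchmark version.
    text = "ORDER 4711 / Miller GmbH / 10 units"   # made-up ASCII-only business data

    ascii_bytes = text.encode("ascii")       # 1 byte per character
    utf16_bytes = text.encode("utf-16-le")   # 2 bytes per character for ASCII/BMP text

    print(len(ascii_bytes), len(utf16_bytes))   # 35 vs 70: twice the bytes to store and move

    # Characters outside Unicode's Basic Multilingual Plane need 4 bytes in UTF-16.
    print(len("\U0001D11E".encode("utf-16-le")))   # 4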

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order).

SAP ERP 6.0 Enhancement Pack 4 (Unicode) Results
(New version of the benchmark as of January 2009)

System                 Processors                        Memory   OS                              Database         Users   SAP ERP/ECC Release      SAPS     SAPS/Proc   Date
Sun Fire X4600 M2      8x AMD Opteron 8384 SE @2.7GHz    256 GB   Solaris 10                      MaxDB 7.8        6,050   2009 6.0 EP4 (Unicode)   33,230    4,154      10-Jun-09
HP ProLiant DL785 G5   8x AMD Opteron 8393 SE @3.1GHz    128 GB   Windows Server 2008 Enterprise  SQL Server 2008  5,518   2009 6.0 EP4 (Unicode)   30,180    3,772      24-Apr-09
NEC Express 5800       8x Intel Xeon X7460 @2.66GHz      256 GB   Windows Server 2008 Datacenter  SQL Server 2008  4,485   2009 6.0 EP4 (Unicode)   25,280    3,160      09-Feb-09
Sun Fire X4270         2x Intel Xeon X5570 @2.93GHz      48 GB    Solaris 10                      Oracle 10g       3,700   2009 6.0 EP4 (Unicode)   20,300   10,150      30-Mar-09

SAP ERP 6.0 (non-unicode) Results
(Old version of the benchmark retired at the end of 2008)

System                 Processors                        Memory   OS                              Database         Users   SAP ERP/ECC Release   SAPS     SAPS/Proc   Date
Sun Fire X4600 M2      8x AMD Opteron 8384 @2.7GHz       128 GB   Solaris 10                      MaxDB 7.6        7,825   2005 6.0              39,270    4,909      09-Dec-08
IBM System x3650 M2    2x Intel Xeon X5570 @2.93GHz      48 GB    Windows Server 2003 EE          DB2 9.5          5,100   2005 6.0              25,530   12,765      19-Dec-08
HP ProLiant DL380 G6   2x Intel Xeon X5570 @2.93GHz      48 GB    Windows Server 2003 EE          SQL Server 2005  4,995   2005 6.0              25,000   12,500      15-Dec-08

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Results and Configuration Summary

Hardware Configuration:

    One Sun Fire X4600 M2, with:
      8 x 2.7 GHz AMD Opteron 8384 SE processors (8 processors / 32 cores / 32 threads)
      256 GB memory
      3 x STK2540, 3 x STK2501 each with 12 x 146GB/15KRPM disks

Software Configuration:

    Solaris 10
    SAP ECC Release: 6.0 Enhancement Pack 4 (Unicode)
    MaxDB 7.8

Certified Results

    Performance: 6,050 benchmark users
    SAP Certification: 2009022

Key Points and Best Practices

  • This is the best 8-processor SAP ERP 6.0 EP4 (Unicode) result as of June 10, 2009.
  • The Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark on the Sun Fire X4600 M2 (8 processors, 32 cores, 32 threads, 8x 2.7 GHz AMD Opteron 8384 SE) was able to support 6,050 SAP SD Users on top of the Solaris 10 OS.
  • Since random writes are an important part of this benchmark, we used ZFS to help coalesce them into sequential writes.

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard SAP ERP 6.0 2005/EP4 (Unicode) application benchmarks as of 06/10/09: Sun Fire X4600 M2 (8 processors, 32 cores, 32 threads) 6,050 SAP SD Users, 8x 2.7 GHz AMD Opteron 8384 SE, 256 GB memory, MaxDB 7.8, Solaris 10, Cert# 2009022. HP ProLiant DL785 G5 (8 processors, 32 cores, 32 threads) 5,518 SAP SD Users, 8x 3.1 GHz AMD Opteron 8393 SE, 128 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009009. NEC Express 5800 (8 processors, 48 cores, 48 threads) 4,485 SAP SD Users, 8x 2.66 GHz Intel Xeon X7460, 256 GB memory, SQL Server 2008, Windows Server 2008 Datacenter Edition, Cert# 2009001. Sun Fire X4270 (2 processors, 8 cores, 16 threads) 3,700 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, 48 GB memory, Oracle 10g, Solaris 10, Cert# 2009005. Sun Fire X4600 M2 (8 processors, 32 cores, 32 threads) 7,825 SAP SD Users, 8x 2.7 GHz AMD Opteron 8384, 128 GB memory, MaxDB 7.6, Solaris 10, Cert# 2008070. IBM System x3650 M2 (2 processors, 8 cores, 16 threads) 5,100 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, DB2 9.5, Windows Server 2003 Enterprise Edition, Cert# 2008079. HP ProLiant DL380 G6 (2 processors, 8 cores, 16 threads) 4,995 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, 48 GB memory, SQL Server 2005, Windows Server 2003 Enterprise Edition, Cert# 2008071.

SAP and R/3 are registered trademarks of SAP AG in Germany and other countries. More info: www.sap.com/benchmark

Wednesday Jun 03, 2009

Welcome to the BestPerf group blog! This blog will contain many different performance results and the best practices learned from a wide variety of performance work on Sun's broad range of products.

Over the coming days, you will see many engineers in the Strategic Applications Engineering group posting on a wide variety of topics and providing useful information to users of Sun's technologies. Some of the areas explored will be:

world-record, performance, $/Perf, watts, watt/perf, scalability, bandwidth, RAS, virtualization, security, cluster, latency, HPC, Web, Application, Database

This blog is copyright 2009 by John Henning.