August 2017
System Pricelists effective 04 February 2026:
- Price rises are mainly due to a global shortage of motherboards, RAM, and NAND.
- This round of rises affects the Diamond+, Thunderbird, DX, SX5, and SX3 servers.
- The rises are mild for desktop models and moderate for server models.
- The rises are concentrated in components such as DDR4 memory for desktops and ECC memory for servers.
- Price rises and shortages are global and are expected to persist through 2026.
- Owing to tension in the global supply chain, we are not sure how long we will be able to hold onto this latest pricelist update.
August 2017

Specifications
| Product Name | SLIM INTERNAL DVD/CD ROM |
| DVD Write Speed | DVD+R PCAV 8X maximum; DVD-R PCAV 8X maximum; DVD+R9 PCAV 6X maximum; DVD-R9 PCAV 6X maximum |
| DVD Read Speed | DVD-ROM CAV 8X maximum |
| DVD ReWrite Speed | DVD+RW ZCLV 8X maximum; DVD-RW ZCLV 6X maximum |
| DVD Random Access Time | 350ms |
| CD Write Speed | CD-R PCAV 24X maximum |
| CD Read Speed | CD-ROM CAV 24X maximum |
| CD ReWrite Speed | CD-RW ZCLV 2X maximum |
| CD Random Access Time | 330ms |
| MTBF (Life) | 60,000 POH |
| Environment | Operating: 5°C to 50°C, 10% to 80% relative humidity; Non-operating: -40°C to 65°C, 10% to 90% relative humidity |
| Dimensions | 128 (W) x 12.7 (H) x 126.1 (D) mm |
| Weight | 170g max |
Model: SDVD-ROM
August 2017
There are 3 cache settings:
- Read cache for each logical drive (RAID array)
- Write cache for each logical drive (RAID array)
- Write cache for each physical disk (on each HDD or SSD)
The cache settings can be changed at any time without affecting your server's operation, and they do not require a system restart to take effect.
Read cache for each logical drive (RAID array)
When read caching is enabled, the controller monitors read access to a logical drive and, if it sees a pattern, pre-loads the cache with the data that seems most likely to be read next, thereby improving performance. You can set the Read Cache to:
- Enabled—The controller transfers data from the logical drive to its local cache in portions equal to the stripe size. Use this setting for the best performance when workloads are steady and sequential.
- Disabled—The controller transfers data from the logical drive to its local cache in portions equal to the system I/O request size. Use this setting for the best performance when workloads are random or the system I/O requests are smaller than the stripe size.
To quickly change the read cache setting:
- In the Enterprise View, select a controller, then select a logical drive on that controller.
- On the ribbon, in the Logical Device group, click Set Properties. The Set Properties window opens.
- In the Read Cache drop-down list, select Enabled or Disabled, as needed.
- Click OK.
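These steps can also be scripted. Below is a minimal sketch using Adaptec's arcconf command-line utility from Python; the controller number (1) and logical drive number (0) are placeholders for your system, and the setcache keywords ("ron" = read cache on, "roff" = off) should be verified against your arcconf version's help output.

    import subprocess

    # Enable the read cache on controller 1, logical drive 0;
    # "noprompt" suppresses the interactive confirmation.
    subprocess.run(
        ["arcconf", "setcache", "1", "logicaldrive", "0", "ron", "noprompt"],
        check=True,
    )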
Write cache for each logical drive (RAID array)
The write cache setting determines when data is stored on a disk drive and when the controller communicates with the operating system. You can set the Write Cache to:
- Disabled (write-through)—The controller sends (or writes) the data to a disk drive, then sends confirmation to the operating system that the data was received. Use this setting when performance is less important than data protection.
- Enabled (write-back)—The controller sends confirmation to the operating system that the data was received, then writes the data to a disk drive. Use this setting when performance is more important than data protection and you aren't using a battery-backup cache or zero-maintenance cache protection module.
- Enabled (write-back) when protected by battery/ZMM—Similar to Enabled (write-back), but used when the controller is protected by a zero-maintenance cache protection module (AFM-700).
Note: (RAID 10, 50, and 60 only) All logical drives comprising a RAID 10/50/60 logical device must have the same write cache setting—either all write-through or all write-back.
To quickly change the write cache setting:
- In the Enterprise View, select a controller, then select a logical drive on that controller.
- On the ribbon, in the Logical Device group, click Set Properties. The Set Properties window opens.
- In the Write Cache drop-down list, select Enabled, Enabled when protected by battery/ZMM, or Disabled, as needed.
- Click OK.
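The write cache can be scripted the same way. In this sketch the arcconf mode keywords are assumed to be "wt" (write-through), "wb" (write-back), and "wbb" (write-back when protected by battery/ZMM); check your arcconf version before relying on them.

    import subprocess

    # Set logical drive 0 on controller 1 to write-back.
    # Use "wt" for write-through, or "wbb" for write-back when
    # the controller is protected by a battery/ZMM.
    subprocess.run(
        ["arcconf", "setcache", "1", "logicaldrive", "0", "wb", "noprompt"],
        check=True,
    )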
Write Cache Policy for an Individual Drive
Note: You can change the write cache setting for an individual drive only if the Global Write Cache Policy is set to "Drive Specific."
By default, disk drive write caching is disabled in maxView Storage Manager. To enable or disable write caching on an individual drive:
- In the Enterprise View, select a controller then, in the Physical Devices tree, select a disk drive.
- On the ribbon, in the Physical Device group, click Set Properties. The Set Properties window opens.
- In the Write-back Cache drop-down list, select Enabled or Disabled.
- Click OK.
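For an individual drive, arcconf addresses the device by channel and device number rather than by logical drive. A sketch under the same assumptions (channel 0 and device 0 are placeholders; list your devices with "arcconf getconfig 1 pd"):

    import subprocess

    # Enable write-back caching on the physical device at
    # channel 0, device 0 of controller 1 ("wt" disables it).
    subprocess.run(
        ["arcconf", "setcache", "1", "device", "0", "0", "wb", "noprompt"],
        check=True,
    )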
August 2017
Introduction
Back in April 2010 we published a set of guidelines describing and explaining which cache settings we recommend depending on whether a working UPS (Uninterruptible Power Supply) or a cache protection module (BBWC or ZMCP) is present. We presented three different sets of settings to achieve the best data integrity, the best performance, or a balance of both. Please see Best Practices: Controller/HDD write-cache settings if you are not familiar with those cache settings.
Since then the use of SSDs (Solid State Drives) has become more and more prevalent, to the point where standard HDDs in servers are now increasingly rare. Oddly, Adaptec (now Microsemi) has suggested that all caching should be disabled when an array is made entirely of SSDs. We were curious to find out more, as this contradicts their suggestion to have the cache enabled for arrays made entirely of HDDs / mechanical drives. We wrote to Adaptec for a more explicit answer and this was their reply:
"For the best performance we recommend all cache enabled. If array is made from all SSD drives, then we recommend all cache disable. However, if you don't have a UPS or the AFM700 installed then to ensure data protection you will need to disable write cache. Read cache is ok to have it enable. If you have AFM or UPS then enable all cache for best performance."
As we were not 100% confident with the advice above we decided to investigate by running some benchmarks of our own.
The Benchmark
Hardware:
- Compucon Workgroup SX E3 server
- Intel Core i3 7100 processor (2C/2T, 3.9GHz, 3M cache)
- 8GB ECC memory (DDR4-2400 Single Channel)
- Microsemi Adaptec 8405 RAID controller
- ADATA SX950 240GB SSD x3
Drivers/Firmware:
- X11SSM-F BIOS revision 2.0a
- Adaptec 8405 firmware v7.11.0 build 33173
- Adaptec 8405 driver v7.5.0.54013
- ADATA SX950 firmware level 5A
Software:
- Microsoft Windows Server 2016
- Passmark Performance Test v8.0 build 1054
- Crystal Disk Mark v5.1.2 x64
- Anvil's Storage Utilities v1.1.0.337
Software settings:
- Performance Test - Defaults (200MB test file size, block size 16KB)
- Crystal Disk Mark - Defaults (1GiB test file size, average of 5 test runs for each test)
- Anvil's Storage Utilities - 1GB test file size, 8% data compressibility
Array configurations:
- RAID 0 of 3 x 240GB disks
- RAID 1 of 2 x 240GB disks
- RAID 5 of 3 x 240GB disks
Cache settings:
- Logical device controller read cache [ON | OFF], hereafter referred to as Array Read cache;
- Logical device controller write cache [Write-back or ON | Write-through or OFF], hereafter referred to as Array Write cache;
- Physical device write cache [Write-back or ON | Write-through or OFF], hereafter referred to as Disk Write cache.
| Configuration # | Configuration | Array Read cache | Array Write cache | Disk Write cache |
| 1 | 111 | ON | ON | ON |
| 2 | 110 | ON | ON | OFF |
| 3 | 001 | OFF | OFF | ON |
| 4 | 000 | OFF | OFF | OFF |
| 5 | 100 | ON | OFF | OFF |
| 6 | 101 | ON | OFF | ON |
| 7 | 011 | OFF | ON | ON |
| 8 | 010 | OFF | ON | OFF |
Performance Test Results
This benchmark application contains a number of tests that exercise the mass storage (the RAID array in this case) connected to the computer. The C: drive is used for the test, where the OS, applications, and test data all reside.
The test file size is 200MB and the read or write block sizes used are 16KB. Each test uses uncached asynchronous file operations (with an IO queue length of 20), and each test runs for at least 20 seconds.
Sequential Read: A large test file is created on the disk under test. The file is read sequentially from start to end.
Sequential Write: A large file is written to the disk under test. The file is written sequentially from start to end.
Random RW: A large test file is created on the disk under test. The file is accessed randomly: a seek is performed to move the file pointer to a random position in the file, a 16KB block is read or written, then another seek is performed. The amount of data actually transferred is highly dependent on the disk seek time.
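To make these access patterns concrete, here is a minimal Python sketch of the sequential and random read patterns described above. It is not Performance Test itself: the real tool uses uncached asynchronous I/O with a queue length of 20, which this sketch omits, so it will largely measure the operating system's page cache rather than the array.

    import os
    import random
    import time

    PATH = "testfile.bin"             # placeholder: a file on the array under test
    FILE_SIZE = 200 * 1024 * 1024     # 200MB test file, as in Performance Test
    BLOCK = 16 * 1024                 # 16KB blocks

    with open(PATH, "wb") as f:       # create the test file
        f.write(os.urandom(FILE_SIZE))

    def read_throughput(random_access: bool) -> float:
        """Read 16KB blocks for 20 seconds and return MB/s."""
        done = 0
        start = time.perf_counter()
        with open(PATH, "rb", buffering=0) as f:
            while time.perf_counter() - start < 20:
                if random_access:     # seek to a random block before each read
                    f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
                elif f.tell() + BLOCK > FILE_SIZE:
                    f.seek(0)         # sequential: wrap around at end of file
                done += len(f.read(BLOCK))
        return done / (time.perf_counter() - start) / 1e6

    print("Sequential Read:", round(read_throughput(False)), "MB/s")
    print("Random Read:    ", round(read_throughput(True)), "MB/s")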
RAID 0
| Configuration | Sequential Read | Sequential Write | Random RW | Sequential Read (% of #1) | Sequential Write (% of #1) | Random RW (% of #1) |
| #1 (111) | 2308 | 2387 | 2342 | 100% | 100% | 100% |
| #2 (110) | 2562 | 2548 | 2178 | 111% | 107% | 93% |
| #3 (001) | 1237 | 532 | 674 | 54% | 22% | 29% |
| #4 (000) | 1274 | 80 | 182 | 55% | 3% | 8% |
| #5 (100) | 1878 | 178 | 130 | 81% | 7% | 6% |
RAID 1
| Configuration | Sequential Read | Sequential Write | Random RW | Sequential Read (% of #1) | Sequential Write (% of #1) | Random RW (% of #1) |
| #1 (111) | 1902 | 2436 | 2072 | 100% | 100% | 100% |
| #2 (110) | 1890 | 1873 | 2358 | 99% | 77% | 114% |
| #3 (001) | 482 | 319 | 236 | 25% | 13% | 11% |
| #4 (000) | 471 | 35 | 53 | 25% | 1% | 3% |
| #5 (100) | 1954 | 133 | 70 | 103% | 5% | 3% |
| #6 (101) | 1917 | 299 | 240 | 101% | 12% | 12% |
| #7 (011) | 2589 | 2397 | 2422 | 136% | 98% | 117% |
| #8 (010) | 2642 | 2823 | 2467 | 139% | 116% | 119% |
RAID 5
| Configuration | Sequential Read | Sequential Write | Random RW | Sequential Read (% of #1) | Sequential Write (% of #1) | Random RW (% of #1) |
| #1 (111) | 2595 | 1876 | 2367 | 100% | 100% | 100% |
| #2 (110) | 2497 | 1902 | 2093 | 96% | 101% | 88% |
| #3 (001) | 1159 | 126 | 315 | 45% | 7% | 13% |
| #4 (000) | 1144 | 85 | 71 | 44% | 5% | 3% |
| #5 (100) | 3085 | 115 | 85 | 119% | 6% | 4% |
UPDATE: Having completed all the benchmarks we reviewed the results and found the figures presented by Performance Test unusual. We therefore re-ran the RAID 1 tests: two more times on configuration #1 (111), and one additional time on each of configurations #2 to #4 (110, 001, 000):
RAID 1
| Configuration | Sequential Read | Sequential Write | Random RW | Sequential Read (% of #1) | Sequential Write (% of #1) | Random RW (% of #1) |
| 111 | 1902 | 2436 | 2072 | 100% | 100% | 100% |
| 111 (run 2) | 2856 | 3003 | 2434 | 150% | 123% | 117% |
| 111 (run 3) | 2933 | 2105 | 2067 | 154% | 86% | 100% |
| 110 | 1890 | 1873 | 2358 | 99% | 77% | 114% |
| 110 (run 2) | 2240 | 2039 | 2423 | 118% | 84% | 117% |
| 001 | 482 | 319 | 236 | 25% | 13% | 11% |
| 001 (run 2) | 470 | 30 | 55 | 25% | 1% | 3% |
| 000 | 471 | 35 | 53 | 25% | 1% | 3% |
| 000 (run 2) | 482 | 55 | 289 | 25% | 2% | 14% |
| 100 | 1954 | 133 | 70 | 103% | 5% | 3% |
| 101 | 1917 | 299 | 240 | 101% | 12% | 12% |
| 011 | 2589 | 2397 | 2422 | 136% | 98% | 117% |
| 010 | 2642 | 2823 | 2467 | 139% | 116% | 119% |
Unfortunately, it appears that Performance Test is influenced by other run-time factors and can present wildly different figures from one run to the next.
Quick Analysis and Comments
- The Sequential Write test performs well whenever the Array Write cache is turned on (x1x). With the test file only 200MB in size, the data set fits comfortably inside the RAID controller's 1GB cache, so there is no need to wait for the data to be written to the physical disks.
- There are no obvious patterns for the controller read cache setting, likely due to the lack of 'hot data' for the controller to store in the cache for frequent reading purposes. Hot data is data that is accessed and read repeatedly.
- Configurations #7 and #8 (011 and 010) looked good initially, but with the addition of the second and third runs for configuration #1 their appeal is now questionable. We would need to run many more tests to gain any further insight if we relied on Performance Test alone, so we place more emphasis on the results from the other two benchmark applications instead.
Crystal Disk Mark Results
This disk benchmark uses DISKSPD, an open source storage load generator / performance test tool from Microsoft's Windows Server and Cloud Server Infrastructure Engineering teams.
The size of the data set is 1GiB and each test is run 5 times, with the result presented as an average value. The figures have been rounded to the nearest integer for better readability. The percentages table that follows each result allows an easier comparison of relative performance, where configuration #1 (all caching turned on) has been set as the baseline at 100%.
Read Q = Sequential Read with Queue Depth of 32 in a single thread (Block Size = 128KiB)
Write Q = Sequential Write with Queue Depth of 32 in a single thread (Block Size = 128KiB)
Random Read Q = Random Read with Queue Depth of 32 in a single thread (Block Size = 4KiB)
Random Write Q = Random Write with Queue Depth of 32 in a single thread (Block Size = 4KiB)
Read = Sequential Read with Queue Depth of 1 in a single thread (Block Size = 1MiB)
Write = Sequential Write with Queue Depth of 1 in a single thread (Block Size = 1MiB)
Random Read = Random Read with Queue Depth of 1 in a single thread (Block Size = 4KiB)
Random Write = Random Write with Queue Depth of 1 in a single thread (Block Size = 4KiB)
All results are expressed in MB/s.
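The "Q" tests differ from their counterparts only in how many I/O requests are kept in flight at once. DISKSPD issues this overlapped I/O natively; the rough Python sketch below approximates queue depth with a thread pool (file path and size are placeholders, the test file is assumed to already exist, and the OS page cache is not bypassed):

    import random
    from concurrent.futures import ThreadPoolExecutor

    PATH = "testfile.bin"            # placeholder: a file on the array under test
    FILE_SIZE = 1 << 30              # 1GiB, as in Crystal Disk Mark
    BLOCK = 4096                     # 4KiB random-access block size
    QUEUE_DEPTH = 32                 # number of reads kept in flight

    def read_random_block(_: int) -> int:
        with open(PATH, "rb", buffering=0) as f:
            f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
            return len(f.read(BLOCK))

    # With max_workers=32, up to 32 reads are outstanding at once (QD32);
    # max_workers=1 reduces this to the QD1 case.
    with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
        total = sum(pool.map(read_random_block, range(50_000)))
    print(round(total / 1e6), "MB read at queue depth", QUEUE_DEPTH)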
(Configurations #1 to #8 and their cache settings are as defined in the configuration table above.)
RAID 0 (MB/s)
| Configuration | Read Q | Write Q | Random Read Q | Random Write Q | Read | Write | Random Read | Random Write |
| #1 (111) | 1009 | 1495 | 435 | 79 | 937 | 1471 | 48 | 65 |
| #2 (110) | 865 | 624 | 438 | 21 | 901 | 594 | 48 | 19 |
| #3 (001) | 1555 | 1367 | 545 | 497 | 971 | 896 | 27 | 61 |
| #4 (000) | 1553 | 314 | 548 | 19 | 952 | 304 | 27 | 5 |
| #5 (100) | 879 | 345 | 427 | 19 | 661 | 230 | 49 | 5 |

RAID 0 (% of configuration #1)
| Configuration | Read Q | Write Q | Random Read Q | Random Write Q | Read | Write | Random Read | Random Write |
| #1 (111) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| #2 (110) | 86% | 42% | 101% | 27% | 96% | 40% | 100% | 30% |
| #3 (001) | 154% | 91% | 125% | 632% | 104% | 61% | 56% | 93% |
| #4 (000) | 154% | 21% | 126% | 24% | 102% | 21% | 56% | 8% |
| #5 (100) | 87% | 23% | 98% | 24% | 71% | 16% | 100% | 7% |
RAID 1 (MB/s)
| Configuration | Read Q | Write Q | Random Read Q | Random Write Q | Read | Write | Random Read | Random Write |
| #1 (111) | 629 | 543 | 358 | 34 | 496 | 559 | 49 | 38 |
| #2 (110) | 650 | 284 | 339 | 8 | 454 | 216 | 49 | 8 |
| #3 (001) | 608 | 472 | 392 | 195 | 462 | 412 | 27 | 58 |
| #4 (000) | 597 | 139 | 345 | 6 | 472 | 180 | 27 | 5 |
| #5 (100) | 684 | 191 | 347 | 7 | 473 | 146 | 49 | 4 |
| #6 (101) | 628 | 489 | 347 | 196 | 504 | 406 | 49 | 50 |
| #7 (011) | 1695 | 497 | 541 | 35 | 167 | 559 | 47 | 36 |
| #8 (010) | 1549 | 283 | 574 | 8 | 215 | 231 | 47 | 8 |

RAID 1 (% of configuration #1)
| Configuration | Read Q | Write Q | Random Read Q | Random Write Q | Read | Write | Random Read | Random Write |
| #1 (111) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| #2 (110) | 103% | 52% | 95% | 23% | 91% | 39% | 99% | 21% |
| #3 (001) | 97% | 87% | 109% | 574% | 93% | 74% | 56% | 152% |
| #4 (000) | 95% | 26% | 96% | 18% | 95% | 32% | 56% | 12% |
| #5 (100) | 109% | 35% | 97% | 19% | 95% | 26% | 99% | 12% |
| #6 (101) | 100% | 90% | 97% | 575% | 102% | 73% | 99% | 131% |
| #7 (011) | 269% | 91% | 151% | 103% | 34% | 100% | 95% | 94% |
| #8 (010) | 246% | 52% | 160% | 24% | 43% | 41% | 96% | 21% |
RAID 5 (MB/s)
| Configuration | Read Q | Write Q | Random Read Q | Random Write Q | Read | Write | Random Read | Random Write |
| #1 (111) | 906 | 947 | 426 | 12 | 685 | 908 | 49 | 11 |
| #2 (110) | 911 | 364 | 409 | 9 | 667 | 9 | 49 | 8 |
| #3 (001) | 1462 | 379 | 545 | 99 | 671 | 164 | 27 | 13 |
| #4 (000) | 1454 | 371 | 543 | 8 | 664 | 98 | 28 | 4 |
| #5 (100) | 976 | 363 | 414 | 8 | 681 | 119 | 48 | 4 |

RAID 5 (% of configuration #1)
| Configuration | Read Q | Write Q | Random Read Q | Random Write Q | Read | Write | Random Read | Random Write |
| #1 (111) | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| #2 (110) | 101% | 38% | 96% | 77% | 97% | 1% | 100% | 74% |
| #3 (001) | 161% | 40% | 128% | 838% | 98% | 18% | 55% | 117% |
| #4 (000) | 161% | 39% | 128% | 64% | 97% | 11% | 56% | 34% |
| #5 (100) | 108% | 38% | 97% | 70% | 99% | 13% | 98% | 34% |
Quick Analysis and Comments
- The test file is 1 GiB in size and therefore cannot fit in the RAID controller's cache. The results are consequently not affected by the controller cache anywhere near as much as in Performance Test.
- Leaving all caching ON (configuration #1, 111) appears to be an optimal and balanced approach for systems that require both sequential and random read/write patterns.
- Turning off the array write cache (configurations #3 and #6, 001 and 101) is a viable option as it increases random write performance significantly at the expense of sequential write performance. Servers that spend most of their time serving multiple users and databases will benefit. Configuration #6 (101) is the more appealing of the two, as the array read cache lifts its random read performance (whereas configuration #3's suffered).
- As the benchmark runs every test 5 times and then presents an average score, the results are more reliable and reproducible. We ran additional test runs and can confirm this point.
Anvil's Storage Utilities Results
Similar to Crystal Disk Mark, this software can benchmark logical drives and present sequential and random performance for both reads and writes with varying queue depths. It provides yet another view of the storage subsystem's performance.
The test data set size is set to 1GB and compression is set to 8% (database). The figures have been rounded to the nearest integer for better readability. The percentages on the right allow an easier comparison of relative performance, where configuration #1 (all caching turned on) has been set as the baseline at 100%.
Read = Sequential Read (Block Size = 4MB)
Random Read = Random Read (Block Size = 4KiB)
Random Read Q = Random Read with Queue Depth 16 (Block Size = 4KiB)
Write = Sequential Write (Block Size = 4MB)
Random Write = Random Write (Block Size = 4KiB)
Random Write Q = Random Write with Queue Depth 16 (Block Size = 4KiB)
All results are expressed in MB/s.
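Unlike the other two benchmarks, Anvil's lets you tune how compressible the test data is (8% here). The sketch below shows one plausible way such data can be generated, by mixing highly compressible bytes with incompressible random bytes; how Anvil's defines its percentage internally is not documented here, so treat the exact split as an assumption.

    import os

    def make_test_block(size: int, compressible_fraction: float) -> bytes:
        """Return `size` bytes, roughly `compressible_fraction` of which compress well."""
        n = int(size * compressible_fraction)
        # Zero bytes compress extremely well; random bytes effectively do not.
        return b"\x00" * n + os.urandom(size - n)

    block = make_test_block(1 << 20, 0.08)   # a 1MiB block at ~8% compressibility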
(Configurations #1 to #8 and their cache settings are as defined in the configuration table above.)
RAID 0 (MB/s)
| Configuration | Read | Random Read | Random Read Q | Write | Random Write | Random Write Q |
| #1 (111) | 1507 | 36 | 277 | 2048 | 51 | 85 |
| #2 (110) | 1599 | 35 | 271 | 1009 | 11 | 10 |
| #3 (001) | 1248 | 24 | 278 | 1111 | 48 | 440 |
| #4 (000) | 1324 | 38 | 267 | 385 | 4 | 11 |
| #5 (100) | 1170 | 38 | 270 | 392 | 4 | 12 |

RAID 0 (% of configuration #1)
| Configuration | Read | Random Read | Random Read Q | Write | Random Write | Random Write Q |
| #1 (111) | 100% | 100% | 100% | 100% | 100% | 100% |
| #2 (110) | 106% | 97% | 98% | 49% | 22% | 11% |
| #3 (001) | 83% | 66% | 100% | 54% | 94% | 517% |
| #4 (000) | 88% | 105% | 96% | 19% | 8% | 12% |
| #5 (100) | 78% | 103% | 97% | 19% | 8% | 14% |
RAID 1 (MB/s)
| Configuration | Read | Random Read | Random Read Q | Write | Random Write | Random Write Q |
| #1 (111) | 1032 | 23 | 225 | 771 | 19 | 23 |
| #2 (110) | 904 | 30 | 226 | 354 | 4 | 7 |
| #3 (001) | 485 | 24 | 250 | 420 | 46 | 82 |
| #4 (000) | 487 | 24 | 250 | 165 | 4 | 4 |
| #5 (100) | 716 | 37 | 228 | 162 | 3 | 4 |
| #6 (101) | 753 | 37 | 232 | 415 | 41 | 85 |
| #7 (011) | 790 | 39 | 416 | 799 | 39 | 25 |
| #8 (010) | 862 | 32 | 417 | 349 | 7 | 8 |

RAID 1 (% of configuration #1)
| Configuration | Read | Random Read | Random Read Q | Write | Random Write | Random Write Q |
| #1 (111) | 100% | 100% | 100% | 100% | 100% | 100% |
| #2 (110) | 88% | 131% | 101% | 46% | 21% | 29% |
| #3 (001) | 47% | 103% | 111% | 54% | 239% | 355% |
| #4 (000) | 47% | 103% | 111% | 21% | 22% | 19% |
| #5 (100) | 69% | 157% | 101% | 21% | 13% | 16% |
| #6 (101) | 73% | 157% | 103% | 54% | 213% | 368% |
| #7 (011) | 77% | 166% | 185% | 104% | 200% | 107% |
| #8 (010) | 84% | 137% | 185% | 45% | 38% | 36% |
RAID 5 (MB/s)
| Configuration | Read | Random Read | Random Read Q | Write | Random Write | Random Write Q |
| #1 (111) | 1338 | 37 | 269 | 1285 | 16 | 22 |
| #2 (110) | 1285 | 35 | 263 | 607 | 6 | 8 |
| #3 (001) | 898 | 24 | 277 | 428 | 12 | 81 |
| #4 (000) | 868 | 22 | 281 | 128 | 3 | 6 |
| #5 (100) | 930 | 37 | 263 | 112 | 3 | 5 |

RAID 5 (% of configuration #1)
| Configuration | Read | Random Read | Random Read Q | Write | Random Write | Random Write Q |
| #1 (111) | 100% | 100% | 100% | 100% | 100% | 100% |
| #2 (110) | 96% | 93% | 98% | 47% | 36% | 36% |
| #3 (001) | 67% | 64% | 103% | 33% | 76% | 372% |
| #4 (000) | 65% | 59% | 105% | 10% | 21% | 25% |
| #5 (100) | 69% | 100% | 98% | 9% | 21% | 25% |
Quick Analysis and Comments
- The results are quite similar to those produced by Crystal Disk Mark:
- The test file is 1GB in size and therefore cannot fit in the RAID controller's cache. The results are consequently not affected by the controller cache anywhere near as much as in Performance Test.
- Leaving all caching ON (configuration #1, 111) appears to be an optimal and balanced approach for systems that require both sequential and random read/write patterns.
- Turning off the array write cache (configurations #3 and #6, 001 and 101) is a viable option as it increases random write performance significantly at the expense of sequential write performance. Servers that spend most of their time serving multiple users and databases will benefit. Configuration #6 is the more appealing of the two, as the array read cache lifts its random read performance (whereas configuration #3's suffered).
- Configuration #7, with all caching turned on except the array read cache, produces the best result if and when sequential read performance is not important. A possible theory to explain this is that the controller no longer needs to spend processing cycles recognising which data is hot and should therefore be kept in cache. On the other hand, since this is a synthetic benchmark, an array read cache may not be as useful here and the result may not reflect performance in a production environment.
Conclusion
In general:
- We highly recommend that all servers be protected by a UPS at all times. The batteries within a UPS have a finite lifetime and should be tested as often as practically possible to ensure the UPS will actually work in the event of a power failure. Good preventative maintenance would be to replace the batteries at least every two years.
- We recommend all caching to be turned on for optimal performance:
- Logical device controller read cache [ON]
- Logical device controller write cache [Write-back]
- Physical device write cache [Write-back]
- For utmost data integrity, installing a CP module (cache protection module, also known as BBWC or ZMCP) will help when the physical device write caches are turned OFF (write-through). This protects data from being lost in the event of an AC power failure at the wall socket, a faulty UPS, or a power supply unit failure within your computer system. Write performance will suffer (see configuration #2 in the benchmark results).
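Once the recommended settings are applied, they can be confirmed from the command line as well as in maxView. A small sketch using arcconf's getconfig output (the exact field labels vary between arcconf and firmware versions, so this simply surfaces any cache-related lines):

    import subprocess

    # Query the logical device configuration on controller 1 and
    # print every line that mentions a cache setting.
    out = subprocess.run(
        ["arcconf", "getconfig", "1", "ld"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "cache" in line.lower():
            print(line.strip())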
We did not find any evidence to suggest that following Adaptec's advice of disabling all caching (for arrays made up entirely of SSDs) is a good idea. While some of our benchmarking revealed potentially good cache settings for particular data storage patterns (such as heavy random writes), the synthetic tests were likely unable to make use of the controller's read cache, as most of the test data is randomly generated at run time.
Last but not least, we understand that a server's storage performance is most noticeable to a human when large files are being copied, or when we look at how long a backup job takes to complete. In these cases we also believe leaving all caches on is the optimal arrangement.
For advice on how to change cache settings, please see Changing Read and Write Cache settings.
END