I'm not sure what you're showing. The write access time in that link shows 0.21 msec, which is slower than the write speed I posted, and your read is 0.09 msec versus my 0.15 msec. In absolute time terms, the difference is inconsequential.
Are you ready to read a whole big pile of nerdom to answer this question? Because you can either skip the next umpteen paragraphs and take my word for it, or you can ignore me and assume I'm just some jerk, or you can read a crap-ton and hopefully understand why it doesn't work the way you hope it would...
The reason AS-SSD isn't a good seek-time tool is partially its own fault, and partially an issue with the Windows OS.
It's easiest to explain when we start with Windows' deficiencies: by default, Windows can only give you time in 15.6msec increments. Meaning, if you write an application that does nothing but pipe out the time down to the millisecond, you'll see it move in 15.6msec chunks like this:
<this continues for a bit...>
<you get the idea...>
... and so on.
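If you want to see that granularity for yourself, here's a minimal sketch (mine, not anything AS-SSD ships) that polls the wall clock and reports the smallest jump it ever observes. On a default-configured Windows box of that era you'd see roughly 15.6 msec steps; on a modern system with a high-resolution clock the step will be far smaller:

```python
import time

def measure_tick(samples=500_000):
    """Poll the wall clock repeatedly and return the smallest nonzero
    step observed, in seconds. This approximates the OS timer
    granularity that a naive time-stamping application would see."""
    last = time.time()
    smallest = None
    for _ in range(samples):
        now = time.time()
        delta = now - last
        if delta > 0 and (smallest is None or delta < smallest):
            smallest = delta
        last = now
    return smallest

if __name__ == "__main__":
    tick = measure_tick()
    print(f"smallest observed clock step: {tick * 1000:.4f} msec")
```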
Why does Windows do this? There are lots of reasons: much of it is backwards compatibility reaching back to when PC hardware timers weren't good enough to provide that kind of resolution; other reasons include power usage, plus nebulous pragmatic things like guaranteed event delivery time slices (thread quanta) and the like.
There are ways that you can "ask" Windows to increase this timer resolution; the minimum value under the NT6 kernel (Vista, Server 2k8 / 2k8r2, Windows 7) is 0.5msec give or take. As you might expect, AS-SSD does ask for the increased timer resolution, but only asks for 1msec resolution. Why? Because that's the minimum that XP can support, and they need XP compatibility.
How can someone figure this out? Simply by writing about three lines of code to expose the timer resolution setting from Windows. Then, run the code while AS-SSD is performing the access time section of the test and you'll see the resolution move to 1msec -- and then back to whatever it was beforehand when the seek time test is complete.
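For the curious, those "three lines" boil down to calling `NtQueryTimerResolution` out of ntdll. Here's a hedged sketch in Python rather than native code (the function name is real; the wrapper around it is mine), which returns `None` on non-Windows systems:

```python
import ctypes
import sys

def query_timer_resolution():
    """Return (maximum, minimum, current) timer resolution in msec on
    Windows, or None elsewhere. Uses ntdll!NtQueryTimerResolution,
    which reports values in 100-nanosecond units."""
    if sys.platform != "win32":
        return None
    ntdll = ctypes.WinDLL("ntdll")
    maximum = ctypes.c_ulong()
    minimum = ctypes.c_ulong()
    current = ctypes.c_ulong()
    ntdll.NtQueryTimerResolution(ctypes.byref(maximum),
                                 ctypes.byref(minimum),
                                 ctypes.byref(current))
    # Convert 100 ns units to milliseconds.
    return tuple(v.value / 10_000 for v in (maximum, minimum, current))

if __name__ == "__main__":
    res = query_timer_resolution()
    print(res if res else "not running on Windows")
```

Run that in a loop while AS-SSD does its access-time test and you can watch the "current" value drop to 1 msec and then pop back.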
We have to leave Windows' deficiencies for a minute, and go talk about AS-SSD again for a second: how is it that AS-SSD is telling us about seek times down to the thousandths of a millisecond if Windows can't measure time that tightly? I mean, we're talking four orders of magnitude smaller than Windows is capable of delivering by default -- that seems somehow wrong, right?
The answer is that AS-SSD sends out a LOT of seeks, and then aggregates the entire set of seeks into a single chunk of time. So, if I send out 1000 seeks all in a row and then only measure the time from the first to the last, I can then divide (Y) milliseconds by (X) seeks and tell you the average time for a single seek. Most people intuit this, and it makes sense on the surface. In order to understand why it doesn't exactly work like that, we now have to go back to Windows again...
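The arithmetic itself is trivial; the point is that one coarse measurement gets divided down into a per-seek figure far finer than the clock could ever time directly (the function below is just my illustration of the math):

```python
def average_seek_ms(total_elapsed_ms, seek_count):
    """One coarse timing window over many seeks, divided down to a
    per-seek average. The result can carry far more decimal places
    than the underlying timer could resolve for a single seek."""
    return total_elapsed_ms / seek_count

# 1000 seeks measured as one 150 msec window -> 0.15 msec "per seek",
# even with a 1 msec timer that could never time one seek on its own.
print(average_seek_ms(150.0, 1000))
```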
Under NT5 (Windows 2000, XP, Server 2003) and earlier, hardware I/O streams had no concept of priority, preemption, or coalescing. This is because I/O (whether it was serial, network, or disk) was always driven by a single FIFO-like ring-0 kernel thread that then handed the data stream "up the stack" to the application residing in user space. Because it was a kernel thread, it was allowed to run rampant all over the entire system; you could easily and plainly see this in usage when the disk was suddenly VERY busy and the entire computer slowed to a crawl.
I mentioned coalescing too, and hinted at the FIFO-like (first-in, first-out if you are unsure of the term) behavior of that I/O thread. Because there was no concept of priority, Windows could only assume that all requests were equally important and none could wait. Thus, if you had 50 disk page write requests intermingled with 50 network packet sends, Windows would handle them exactly in the order they were received. It would context-switch itself to death while writing to disk, then sending a packet, then writing to disk, then sending a packet, then writing to disk, then sending a packet, blah-de-blah.
This was actually a very nasty problem for Microsoft in the enterprise, especially an enterprise that used Fibre Channel-connected SAN devices. If you completely hammered a Windows 2000 Server or Server 2003 box with simultaneous epic disk and network requests, you could almost entirely STOP the whole server, as processor 0 would spike to 100% utilization because of the I/O thread locking everything else out. As a stop-gap, Microsoft released the Scalable Networking Pack for Server 2003, which offered several mitigations -- TCP Chimney Offload, and more importantly, the multithreading of I/O streams.
A happy side-benefit of that massive oversight on Microsoft's part was that disk I/O "got done" on NT5 operating systems without anything being capable of interrupting it. The problem, of course, was exactly the same thing: nothing could interrupt it, even when something more important needed to run.
In NT6, as part of the major kernel architectural overhaul, Microsoft now allows for the threading, prioritization, preemption and coalescing of I/O requests. Every time you hear a fanboy gripe about how Microsoft totally could've put DX10 into XP, this is one of the fundamental reasons why Microsoft couldn't. This new handling of I/O streams extends to all I/O interfaces, including the obvious disk and network, but also the less-obvious serial, printer, and even I/O to offboard memory pools like your video card, Fibre Channel and RAID controllers.
A "normal"-priority I/O thread (this I/O thread and priority are not linked to the process priority that you see in Task Manager -- but you CAN see the I/O thread priority in Resource Monitor) can now be coalesced into larger bunches of work, which can then be dispatched in whatever order the kernel sees fit. That I/O thread can also be preempted by other I/O with higher priority, or context-switched onto another CPU core to complete the work.
I know, I know, now you're like: WTF is this dude ranting about? DX10 for God's sake? Can't he get to the point?
The point isn't DX10; the point is that how NT6 handles I/O and thread event coalescing directly affects how applications like AS-SSD must be designed in order to do what you expect them to. AS-SSD's I/O threading is not configured for high-priority traffic (mostly for backwards-compatibility reasons with XP), which means its I/O threads can and will be coalesced into "bunches" of work, threaded out, and then committed when the kernel allows them to be.
By the very definition of "normal" priority, this will not be realtime. As such, when you are working with an incredibly fast device such as an SSD in a "normal"-priority I/O thread, an NT5 operating system (XP) is always going to show faster access times than an NT6 operating system (Vista / Windows 7). Does that mean NT6 is a slower OS? Nope, actually quite the opposite: if you tell NT6 that an I/O stream is of high or realtime priority, an NT6 operating system will completely crush the timing of an NT5 operating system.
It also means that individual AS-SSD runs on an NT6 operating system will show variance that you cannot directly account for; this variance is the kernel management of your I/O thread rather than anything spurious. This is also why your read/write speeds are never the same twice either, as ANY disk I/O is going to have the same path traversal management.
In order to provide the absolute most accurate results, AS-SSD needs to do several things. Timer resolution needs to be OS-dependent: NT6 operating systems should go straight to a 0.5 msec timer resolution; NT5 can stay at 1.0 msec. I/O threads need to be created at the highest priority, which also means the parent application thread should be at the highest priority too.
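What that could look like in practice, roughly: crank the timer down via `NtSetTimerResolution` (which, unlike the documented `timeBeginPeriod`, accepts sub-millisecond values) and bump the thread priority before timing anything. This is purely my illustrative sketch -- AS-SSD is closed source, so this shows what such a fix *could* look like, not what it does -- and it's a no-op off Windows:

```python
import ctypes
import sys

THREAD_PRIORITY_TIME_CRITICAL = 15

def crank_up_timing():
    """Best-effort prep before a timing run on Windows: request a
    0.5 msec timer via ntdll!NtSetTimerResolution (value is in 100 ns
    units, so 5000 = 0.5 msec) and raise the current thread to
    time-critical priority. Returns True if attempted, False on
    non-Windows platforms."""
    if sys.platform != "win32":
        return False
    ntdll = ctypes.WinDLL("ntdll")
    kernel32 = ctypes.WinDLL("kernel32")
    actual = ctypes.c_ulong()
    ntdll.NtSetTimerResolution(5000, True, ctypes.byref(actual))
    kernel32.SetThreadPriority(kernel32.GetCurrentThread(),
                               THREAD_PRIORITY_TIME_CRITICAL)
    return True
```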
The absolute best way to do this isn't in software at all, but to have a hardware disk controller do the measurements and report back. Then you completely avoid the Windows kernel stack, which means your results would be damn-near identical for every run.
Hope you enjoyed the read