2012年10月8日 星期一

GDB script

http://wiki.csie.ncku.edu.tw/embedded/schedule

A very interesting lab... by jserv

1. Use qemu to emulate a particular board.
2. Write an app that calls the driver API to have the UART print "hello".
3. It looks simple, but the real contest is who can write the shortest app… (ideally, all the work is handed over to GDB).
4. Approach: trace and drive the program through gdb (see the sketch below).
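
A minimal sketch of the idea, assuming QEMU's versatilepb machine (whose PL011 UART data register sits at 0x101f1000) and a hypothetical image app.bin; whether a debugger poke to MMIO actually reaches the device model depends on the QEMU version, so treat this as one possible shape of a solution, not the lab's answer:

 # boot the board halted (-S) with a gdb stub listening on port 1234
 qemu-system-arm -M versatilepb -kernel app.bin -nographic -S -gdb tcp::1234 &

 # let gdb write "hello" into the UART data register byte by byte
 # (104 101 108 108 111 are the ASCII codes for h e l l o)
 gdb -batch -ex 'target remote localhost:1234' \
     -ex 'set {char}0x101f1000 = 104' \
     -ex 'set {char}0x101f1000 = 101' \
     -ex 'set {char}0x101f1000 = 108' \
     -ex 'set {char}0x101f1000 = 108' \
     -ex 'set {char}0x101f1000 = 111' \
     -ex detach

With this, the app itself barely has to do anything; gdb supplies all the output behavior from the outside.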




2012年7月3日 星期二

"轉" SSD的初使化 (用hdpara去erase整個SSD)




hdparm supports the ATA TRIM command through its --trim-sector-ranges option:

 # hdparm --trim-sector-ranges start:count /dev/sda

passing the block ranges you want to TRIM in place of start and count, and the SSD device in place of /dev/sda. It has the advantage of being fast and not writing zeros to the drive. Rather, it simply sends TRIM commands to the SSD controller, letting it know that you don't care about the data in those blocks and that it can freely assume they are unused in its garbage-collection algorithm.

You probably need to run this command as root. Since this command is extremely dangerous and can immediately cause major data loss, you also need to pass the --please-destroy-my-drive argument to hdparm (I haven't added it to the command line here, to prevent accidental data loss caused by copy and paste).

In the command line above, start is the address of the first block (sector) to TRIM, and count is the number of blocks to mark as free from that starting address. You can pass multiple ranges to the command.
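
For instance, a sketch with made-up sector numbers, discarding two ranges in a single invocation (the safety flag is included here, so do not paste this against a drive whose data you still want):

 # hdparm --please-destroy-my-drive --trim-sector-ranges 2048:1024 8192:1024 /dev/sda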
Having personally done it with hdparm v9.32 on Ubuntu 11.04 on my laptop with a 128GB Crucial RealSSD C300, I have to point out an issue: I was not able to pass the total number of disk blocks (0:250069680) as the range. I manually found (essentially "binary searched" by hand) a large enough block count that worked (40000) and was able to issue TRIM commands over a sequence of 40000-sector ranges to free up the entire disk. It's possible to do so with a simple shell script like this (tested on Ubuntu 11.04 as root):
 # fdisk -lu /dev/sda

 Disk /dev/sda: 128.0 GB, 128035676160 bytes
 255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
 ...  
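
As an aside, blockdev can print that total sector count directly (assuming util-linux's blockdev is available; it reports the size in 512-byte sectors):

 # blockdev --getsz /dev/sda
 250069680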
To erase the entire drive, take the total number of sectors reported above, replace 250069680 in the following line with that number, and run (adding --please-destroy-my-drive):
 # i=0; while [ $i -lt 250069680 ]; do echo $i:40000; i=$((i+40000)); done \
 | hdparm --trim-sector-ranges-stdin /dev/sda
And you're done! You can try reading the raw contents of the disk with hexedit /dev/sda before and after, and verify that the drive has discarded the data.
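
If you'd rather skip the interactive editor, a quick spot check with dd and hexdump does the same job; on drives that return deterministic zeroes after TRIM, the post-TRIM dump should be all zeros:

 # dd if=/dev/sda bs=512 count=8 2>/dev/null | hexdump -C | head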

Of course, even if you don't want to use Linux as the primary OS of the machine, you can leverage this trick by booting off a live CD and running it on the drive.

2012年6月12日 星期二

ioMemory virtual storage layer

http://www.fusionio.com/overviews/vsl-technical-overview/ (repost)


With a virtual memory subsystem, when main memory runs short, some "pages" can be swapped out to secondary storage,
typically an HDD. Now that SSDs are moving in, it's about time this subsystem got optimized too~~

Yes, Fusion-io did just that, as follows:

@ The original page table becomes a block table.
Also, the translation step moves from the device to the host.

--------- excerpt ----

With the addition of VSL, the Fusion ioMemory architecture now brings the full disruptive potential of solid state memory to the enterprise.

A HYBRID ARCHITECTURE

Fusion's VSL is a flash-based subsystem to accelerate today's enterprise-class operating systems. It virtualizes NAND flash arrays, combining key elements of the two pillars of modern operating systems: the I/O subsystem and the virtual memory subsystem.
VSL combines the advantages of a virtual memory architecture with a transactional file system approach on an array of NAND flash.

I/O SUBSYSTEM EMULATION

The I/O subsystem in today's operating systems includes a common interface for block-based applications, such as file systems, volume managers, and applications, to access persistent data (storage). VSL utilizes this block interface to present ioMemory modules (i.e. ioDrives) to the operating system as easily accessible block-based storage that existing file systems, volume managers, and applications can use just like a conventional disk.

VIRTUAL MEMORY SUBSYSTEM EMULATION

The virtual memory subsystem abstracts logical data addresses from their physical location by creating a directory of data locations. In modern OSs, a 64-bit virtual address space is used to organize and partition data used by the applications and users. Below this virtual address space lies the physical RAM, which has a much smaller address space. Operating systems and applications use this virtual interface to RAM (called the page table) to look up the physical location of data using a directory rather than requiring massive quantities of RAM just to satisfy each application's memory address space.
Similar to page tables in the host virtual memory subsystem, VSL virtualizes Flash via "block tables." VSL translates block requests to physical ioMemory addresses, also analogous to the virtual memory subsystem. It's important to note that these block tables are stored in host memory. This is a key advantage over other solid-state architectures (e.g. SSDs) that store block tables only in embedded RAM, where block tables are accessible only behind legacy storage protocols.

KEY BENEFITS OF VSL

  • Direct Storage Access. With VSL, the CPU seamlessly interacts with ioMemory as though it were just another memory tier below DRAM. VSL provides direct access from each CPU core to the Flash media across the system bus, independent of other cores, and in parallel. This access results in extremely low latency, near-linear performance scaling, and minimal performance degradation with mixed read/write workloads.
Without VSL, SSDs must serialize access through RAID controllers and use embedded processors to perform block mapping. As data is copied and re-copied through multiple layers of memory and embedded processors, the result is unnecessary context switching, queuing bottlenecks, and I/O storms, which all increase latency.