Can you provide some details about Cypress FFS compared with DMS, with respect to RAM and ROM size?
RAM usage for Cypress FFS scales with the number of erase blocks (block mapping table) and the number of pages per erase block (page mapping table). RAM usage can be modified through the configuration options. ROM usage (code size) varies neither with disk size nor with configuration options. ROM usage can, however, vary significantly based on the processor and compiler. Compiler optimization levels make a 10-15% difference. For ARM processors, Thumb mode can reduce code size, sometimes at the cost of performance.
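The scaling described above can be sketched as a simple model. The entry sizes below are assumptions for illustration only, not Cypress-published values; the point is that RAM grows with one block-map entry per erase block plus one page-map entry per page:

```python
# Illustrative RAM model for a block/page mapping scheme.
# BLOCK_MAP_ENTRY_BYTES and PAGE_MAP_ENTRY_BYTES are assumed values,
# not figures from the Cypress FFS documentation.
BLOCK_MAP_ENTRY_BYTES = 4   # assumed size of one block-map entry
PAGE_MAP_ENTRY_BYTES = 2    # assumed size of one page-map entry

def mapping_table_ram_bytes(erase_blocks, pages_per_block):
    """Estimate mapping-table RAM: block map plus page map."""
    block_map = erase_blocks * BLOCK_MAP_ENTRY_BYTES
    page_map = erase_blocks * pages_per_block * PAGE_MAP_ENTRY_BYTES
    return block_map + page_map
```

Under this model, doubling either the erase-block count or the pages-per-block count roughly doubles the dominant page-map term, which is why the configuration options that control these parameters also control RAM usage.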
To save a few kB, users can identify unused code with a profiler and comment it out, but this is a significant effort and carries some risk. The user could also reduce ROM size by replacing the file system layer with a simple manager on top of the block driver, or by having their application call the block driver directly. The Cypress BD ROM size would be increased if FTL_RPB_CACHE were enabled; by default, this is off (FTL_FALSE in ftl_if_ex.h). Instead of Cypress FFS, the customer could use DMS, which requires 15-20 kB of ROM.
DMS uses the flash medium more efficiently for smaller disks (fewer than 100 erase blocks): DMS requires a minimum of 1 erase block of overhead, while Cypress FFS requires a minimum of 5 erase blocks. RAM usage, however, is higher for DMS than for Cypress FFS. The RAM size for DMS scales with the number of erase blocks, like Cypress FFS, but at a much higher rate. This is one of the reasons we have focused on Cypress FFS in recent years: as our device densities have grown, the RAM required for DMS became unreasonable.
For 512 erase blocks, Cypress BD requires around 32 kB of RAM, while DMS requires around 875 kB. For 32 erase blocks, Cypress BD requires around 2 kB, while DMS requires around 53 kB. Stack size is noted in slide 8: "Stack usage is not included in these numbers and depends on module configurations and combinations; for the complete Cypress FFS in this version and configuration, the maximum stack usage is expected to be in the range of 4600 bytes according to the RVDS 3.0 static call-graph analysis." However, we are not aware of any configuration options that can be used to significantly decrease the stack usage.
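The figures above are consistent with a near-linear bytes-per-erase-block model, which can be used for rough sizing at other disk sizes. The per-block constants below are fits to the quoted data points (Cypress BD: ~2 kB at 32 blocks, ~32 kB at 512 blocks; DMS: ~53 kB at 32 blocks, ~875 kB at 512 blocks), not vendor-published figures, and the DMS fit is only approximate:

```python
# Rough RAM estimators fitted to the data points quoted above.
# The constants are illustrative linear fits, not published values.
FFS_BYTES_PER_BLOCK = 64      # ~32 kB / 512 erase blocks for Cypress BD
DMS_BYTES_PER_BLOCK = 1700    # ~53 kB / 32 erase blocks for DMS (rounded)

def ffs_ram_kb(erase_blocks):
    """Estimated Cypress BD RAM usage in kB."""
    return erase_blocks * FFS_BYTES_PER_BLOCK // 1024

def dms_ram_kb(erase_blocks):
    """Estimated DMS RAM usage in kB (approximate fit)."""
    return erase_blocks * DMS_BYTES_PER_BLOCK // 1024
```

For example, `ffs_ram_kb(512)` reproduces the quoted ~32 kB, and `dms_ram_kb(32)` reproduces the quoted ~53 kB; the roughly 25x gap per erase block is why DMS RAM usage becomes unreasonable at higher device densities.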