I am updating my database from a structure received via SPI. After receiving it, I update all of the characteristics using the corresponding store_in_db calls. I'm finding that this takes far longer than I expect, and I am concerned about spending all this time in a timer-callback context.
16 characteristics are written. All are a mix of UINT8 and UINT16 values (28 bytes total). 10 are notifiable and the other 6 are not. In the traces I am measuring between the last two FC (flow control) toggles, which occur just before and just after the database update calls in my code. I am using the fine timer at 20 ms to kick off the updates, which accounts for the N-N-N case being so high. The device is not connected during the test.
Clean builds between measurements. Traces are enabled/disabled from the makefile via BLE_TRACE_DISABLE. The write and notify flags are controlled by arguments to the store_in_db calls.
- Trace=Y Write=Y Notify=Y: 166 ms
- Trace=N Write=Y Notify=Y: 74 ms
- Trace=N Write=Y Notify=N: 69 ms
- Trace=N Write=N Notify=N: 18 ms
Subtracting the 18 ms baseline gives:
- Y-Y-Y: 148 ms
- N-Y-Y: 56 ms
- N-Y-N: 51 ms
Which works out to roughly 1.8 ms per byte written to GATT (51 ms / 28 bytes). What is the bleprofile_WriteHandle function doing? Is it committing every value to NVRAM under the hood? As expected, the trace calls add significant overhead, and there is no way of completely shutting them off in ROM functions, correct?
Is there anything I can do to optimize my writes or make this process more efficient (other than queuing up each attribute and processing them individually, one per fine timer tick)?
What are the consequences of spending 50-100 ms in a timer callback?
- Labels: ReadWrite Characteristics, SDK 2.X, Timers
- Tags: 857804
I have asked the developers if they could take a look and make some recommendations.
The bleprofile_WriteHandle function only updates the GATT database in RAM. The function that writes to NVRAM is bleprofile_WriteNVRAM. It is indeed better to write one large chunk to NVRAM than several small ones, so if it is acceptable from your application's point of view, collect 64 bytes of data and then make a single NVRAM write call.
So then it's your expectation that each byte stored in the database will take ~2 ms to commit? That seems excessively slow to me. I am using the calls generated by Smart Designer, which do NOT call bleprofile_WriteNVRAM:
```c
// It should be called when 'Device Name' is changed.
BOOL store_in_db_generic_access_device_name(UINT8 *p_value, UINT8 value_len)
{
    BLEPROFILE_DB_PDU db_pdu;

    // Write value to the GATT DB
    ble_trace2("write len:%d handle:%02x", value_len, HDLC_GENERIC_ACCESS_DEVICE_NAME_VALUE);
    memcpy(&db_pdu.pdu[0], p_value, value_len);
    db_pdu.len = value_len;
    bleprofile_WriteHandle(HDLC_GENERIC_ACCESS_DEVICE_NAME_VALUE, &db_pdu);
    return TRUE;
}
```