PSoC™ 5, 3 & 1 Forum Discussions
Hi,
I have purchased a PSoC 5 and I want to use the TIA (transimpedance amplifier), but I can't find any information about its noise spectrum (input-referred noise, 1/f corner, etc.). Is it possible to obtain this information?
Best regards
Looking for the best way to implement firmware updates over the air. It will be a single application. After some thought, I've come up with some possible solutions.
Using Dual-application bootloader:
Since the Cortex-M3 memory cannot be remapped, two binaries would have to be distributed (application 1 and application 2). While app 1 is running, the OTA update function would download app 2, flash it, and set it as active in the metadata. If app 2 is running, the OTA function would update app 1 instead.
Keeping up with two application versions seems like it would be a chore, but this approach would be quicker to implement.
Modifying the bootloader:
Instead of running the dual-app configuration, just run a single application. Partition off the flash to make space for the update. Download the update to the partition; once its integrity is checked, reboot and enter the bootloader. The bootloader will then reflash the application with what was downloaded.
Starting to lean towards this solution.
Open to any opinions. Thanks!
I just purchased the MBR3 evaluation kit, installed EZ-Click, and played around with the module for a short while. At some point, after I clicked "Select Target Device" and then chose external power, the EZ-Click IDE froze. I terminated the program and restarted it, but since then the evaluation kit is no longer detected: in the "Select Target Device" window, the Devices list is always empty, although there is port information.
I'm using the emFile component to address an SD card in combination with other hardware devices that have an SPI interface. I don't have enough pins to use two SPI interfaces (one for the SD card and one for the other SPI devices). Can these two be multiplexed in one way or another?
Thanks for your suggestions
Kris
Hello all,
I have a custom PCB with a PSoC 5 on it.
I have 5 different voltage inputs (3.3 V, 2.5 V, 2.4 V, 1.1 V, 1 V) that I want to monitor continuously, and for this task I want to use the Voltage Fault Detector block.
The problem is that on the 3.3 V rail I cannot seem to generate a power-good signal;
it always stays 0. My settings are OV: 3.6 V, UV: 3.1 V, DAC range: 4 V.
All the other rails generate power-good signals as expected.
I measured with a scope, so I know there is a stable 3.3 V input,
and if I switch to the Power Monitor block, all rails read as good.
Am I doing something wrong?
Or maybe the Voltage Fault Detector block just can't measure 3.3 V?
Thanks, lampel
Hello Team,
We have purchased the CY8CKIT-001 EVK board and a CY8C5868-LP035 module. I need to implement CAN, and we are planning to purchase a CAN bus protocol analyzer (hardware and software). Please suggest a CAN bus protocol analyzer that works with the CY8CKIT-001 EVK board and the CY8C5868-LP035 module.
Will the APGDT002 CAN BUS Analyzer Tool support the CY8CKIT-001 EVK board and CY8C5868-LP035 module? (APGDT002 - MICROCHIP - CAN BUS Analyzer Tool | element14 India)
Thanks and Regards,
Abhishek Naik.
Hello everyone, hello Bob,
I set the UART interrupt to 'RX - on byte received' and kept the other parameters at their defaults, then wrote a simple ISR as follows:
CY_ISR(Isr_UART_Rx)
{
    /* Read the received byte directly from the RX data register */
    uint8 rxData = UART_RXDATA_REG;

    switch (rxData)
    {
        case '1':
            LED1_Write(0u);
            break;
        /* ......... */
        default:
            break;
    }

    /* Echo the byte back */
    UART_TXDATA_REG = rxData;
}
I find that every time, immediately after I download the program into the chip,
the interrupt occurs whenever a character is received;
but when I power the board down and then up again, the interrupt always occurs only after four characters have been received,
which means the RX interrupt fires not on each byte received but when the receive FIFO becomes full.
How can I make the interrupt occur on each byte received?
I have invested a considerable amount of time in creating a DFB program intended to perform preprocessing of input streams. Because the default DFB simulator (v1.4) is next to useless, developing this code required downgrading PSoC Creator from version 4.1 to 2.2, the last version able to run Chris Keeser's extended simulator, which at least shows you something. Cypress support didn't make my task easier either: all four of my recent bug reports were marked as "Cancelled", despite the attached snippets showing direct violations of the DFB specification. They replied with their default "go away" message, i.e. they told me to go to the community forum, as if community members had access to the simulator sources or could fix the misleading, strikingly wrong documentation. I don't know the reason for this hostility, but if it's intended to repel customers, it surely works like a charm.
But to the point: the attached project has a "DSP" page, which contains a DFB instance together with its program. It is fed by two DMA channels and the results are collected by another two. The exact input values are irrelevant; the problem is at the control-flow level, not with incorrectly computed results. All four DMA channels work correctly; I checked that with a few stage->hold forwarding snippets. That is all main.c does: it configures the testing environment and lets me see the debug signals on the scope.

The code simulates well on both simulators (the DFB assembler 1.4's and Keeser's), and the obtained results are in full agreement with the C++ reference implementation. On a real chip it is an epic disaster. The DFB program is composed of two independent calculation engines, but they both boil down to the same task: compute a sequence of three 4th-order CIC filters, each decimating by 4, so the combined decimation factor is exactly 64. To cut the difficulty at least in half, I bypassed the 'monitoring' part, but even the much simpler SDR part is broken. The obvious sign of correctness would be the frequency of the output DMA transfers: for 310,000 input samples per second, the output should be 64 times slower, i.e. 4843.75 samples per second. The scope shows 40-50 kHz with no obvious pattern.

Despite its name, the DFB ALU lacks any logical instructions, so the combined divide-by-64 counter is implemented as a packed array of three 2-bit counters updated in a complex delta/compensator way. I've reused one of the semaphores to check how often the csb_cic_comb_integrate state is visited. Far too often. The desired scenario is as follows: enter csa_sdr_process_data 310e3 times per second, then go to csb_cic_comb_integrate every fourth cycle on each of the I/Q paths, which translates into two subsequent visits after every 8 input samples, because each path has a dedicated CIC filter. Then go further and move to the higher CIC level after every fourth of the already filtered cycles, i.e. once per 16 cycles, then once per 64, and finally store the result in holdb. It was designed this way, and this is what happens on the simulator.
I was trying to figure out what is going wrong this time with the Cypress tools, but ran out of steam. I'm on the verge of throwing the entire PSoC adventure into the dustbin and switching to the much better specified Xilinx Zynq family, but I regret all the time and money spent, so could you please have a look at the attached project and try to guess where the physical implementation of the DFB diverges from its specification so strikingly that my code becomes useless?
Hello people, here is my problem. I'm trying to interface the MAX31856 with the PSoC 5. This chip communicates through an SPI interface. I simply want to read its registers; I've already configured the SPI module and I'm printing the data on an LCD display. My issue is that instead of showing me only the particular register I'm addressing, the display starts showing me the data in all the registers sequentially, one by one. Just to be clear, I'm an amateur. Here is my code:
#include <project.h>
#include <cypins.h>

#define CJHF 0x03u  /* the register that I want to read */

uint8 x;

int main(void)
{
    LCD_Start();
    Clock_Start();
    SPIM_Start();          /* the SPI master must be started before use */
    CyGlobalIntEnable;

    LCD_Position(0u, 0u);

    for(;;)
    {
        Pin_CS_Write(1);
        CyDelay(1);
        Pin_CS_Write(0);

        /* A register read is a two-byte transfer: the address byte,
           then a dummy byte whose RX half carries the register value. */
        SPIM_WriteTxData(CJHF);
        SPIM_WriteTxData(0x00u);   /* dummy byte to clock the data out */
        while (!(SPIM_ReadTxStatus() & SPIM_STS_SPI_DONE))
        {
            /* wait for both bytes to finish shifting */
        }
        (void)SPIM_ReadRxData();   /* discard the byte clocked in during
                                      the address phase */
        x = SPIM_ReadRxData();     /* the actual register contents */

        Pin_CS_Write(1);
        CyDelay(1);

        LCD_Position(0u, 0u);
        LCD_PrintHexUint8(x);
        CyDelay(500);
    }
}
Show LessI am using a clock component to set the frequency of a UDB based I2C component (external clock mode). The frequency setting of this clock component can change from design to design.
I also have a timeout in my code that is dependent on the bus clock frequency - I currently have a manually configured #define for a scaling value for the timeout ... which is ugly as changing the clock frequency in the top level design and missing changing the scaling value in the source code my timeout can fail at run time.
I was hoping that the the generated code for the clock component would include a #define indicating the configured clock frequency however this does not appear to be the case.
Is there a simple way to determine the configured frequency of a given clock at build time? The IDE cetainly has an awareness of this as it indicates the Actual Data Rate of the I2C module when I change the input clock frequency.
I would also be happy to read it at runtime too, the clock API seems only to have the following appropriate API calls:
uint16 Clock_2_GetDividerRegister(void) ;
uint8 Clock_2_GetSourceRegister(void) ;
Currently the clock is sourced from the IMO (XTAL) ... is there an easy way to determine even the frequency of that source?
FYI, I am also running this code on PSoC 4.