When I interviewed JCAPS candidates, I frequently asked the following question:
A source application generates a 4 GB file and expects JCAPS to pick it up through FTP or SFTP. After transforming the data, JCAPS must FTP the output file to a target FTP server. However, the JCAPS domain is configured with only 1 GB of memory. Given this scenario, what is the best architecture you can propose, with minimal impact on the other interfaces?
Most candidates gave wrong answers. Some of them said the source application should split the file into smaller pieces for JCAPS. (No way. The source team does not want to change anything.)
Very few people said, "Use the streaming adapter for FTP." I believe most clients who need to handle big files through FTP/SFTP have used this approach.
Personally, I don't think this is the best approach, because the FTP get() and put() methods still load the data into memory after generating a temporary file. In particular, if you have a common MFT (Managed File Transfer) architecture, you don't need to load a single piece of the source data into memory.
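To illustrate the streaming principle an MFT-style hand-off relies on (this is not the JCAPS Batch adapter API, and the class name, method, and buffer size are assumptions for illustration only), the sketch below copies data between two streams in fixed-size chunks, so memory usage is bounded by the buffer regardless of how large the file is:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch only: stream-to-stream copy with a fixed-size buffer.
// In an MFT-style transfer the InputStream would come from the source
// FTP/SFTP server and the OutputStream would go to the target server,
// so a 4 GB payload never has to fit inside a 1 GB JCAPS heap.
public final class StreamCopy {

    private static final int BUFFER_SIZE = 64 * 1024; // 64 KB chunks (assumed)

    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read); // write only the bytes actually read
            total += read;
        }
        out.flush();
        return total; // number of bytes transferred
    }

    private StreamCopy() { }
}
```

Because the buffer is the only per-transfer allocation, the domain's 1 GB heap never constrains the file size; the same idea applies when JCAPS only orchestrates the transfer instead of buffering the payload itself.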
We will explore how MFT (Managed File Transfer) retrieves and transfers a big file in JCAPS and how JCAPS transforms that file. To simplify my work, I will reuse the jcdFileMappingExample Collaboration. Here is the test environment information.
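With only 1 GB of heap, the transformation itself also has to work record by record rather than on the whole payload. The following is a minimal, hypothetical sketch of that pattern in plain Java, not the actual jcdFileMappingExample code (see the attached sample for that); the file paths and the mapping logic are placeholders:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: transform a large delimited file line by line so that
// only one record is held in memory at a time, no matter how big the file is.
public final class LineByLineTransform {

    // Placeholder mapping: in a real Collaboration this would be the
    // field-level mapping logic; here we simply upper-case the record.
    private static String mapRecord(String record) {
        return record.toUpperCase();
    }

    public static void transform(Path source, Path target) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(source, StandardCharsets.UTF_8);
             BufferedWriter writer = Files.newBufferedWriter(target, StandardCharsets.UTF_8)) {
            String record;
            while ((record = reader.readLine()) != null) {
                writer.write(mapRecord(record));
                writer.newLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Paths are assumptions for illustration only.
        transform(Paths.get("input/source_4gb.dat"), Paths.get("output/target.dat"));
    }

    private LineByLineTransform() { }
}
```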
For the detailed setup and test scenarios, refer to the document and sample zip file below.
Example3_Batch_BigFileHandling_For_JCAF.doc
Example3_Batch_BigFileHandling.zip