As it turns out, it is not that easy even to import a large data volume in order to test one service or another. One should be aware of the possible difficulties and drawbacks of the most popular services. Today it is TeamDesk's turn to show its worth.
The TeamDesk import function offers two ways to bring in data: through copy/paste and directly from a file.
I already got stuck with copy/paste import while testing Dabble DB, and my conclusion is that it is not an option for large data sets. So let's kick off with importing directly from a file right away.
The system quickly checked the file and allowed me to structure my data and create columns. As a matter of fact, it could not detect the data types on its own: TeamDesk treats every column as text (the default, as I understand it), but it did offer an option to set the needed type manually.
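For comparison, here is a minimal sketch of the kind of column type inference I would have expected the importer to attempt. The function name, the file name and the set of supported types are my own assumptions for illustration, not anything from TeamDesk:

```python
import csv
from datetime import datetime

def infer_column_type(values):
    """Guess a column type from sample values, falling back to text.

    Try the strictest type first and relax toward plain text,
    which is what TeamDesk uses as the default for every column.
    """
    def all_match(parser):
        try:
            for v in values:
                parser(v)
            return True
        except ValueError:
            return False

    if all_match(int):
        return "integer"
    if all_match(float):
        return "numeric"
    if all_match(lambda v: datetime.strptime(v, "%Y-%m-%d")):
        return "date"
    return "text"  # the default when nothing stricter fits

with open("large_import.csv", newline="") as f:  # hypothetical file name
    rows = list(csv.reader(f))

header, sample = rows[0], rows[1:101]  # inspect the first 100 data rows
for i, name in enumerate(header):
    print(name, infer_column_type([r[i] for r in sample]))
```

Even a sample-based guess like this would save a round of manual column setup on a wide file.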
It took me 8 minutes to import the file. During the import there was no progress indicator, no warning message, nothing that showed me the process was running and that it was worth waiting for so long.
But anyway, I got a positive result.
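Since the lack of feedback was the sore point, here is a minimal sketch of the kind of client-side progress reporting an 8-minute import deserves. It assumes a plain line-by-line file read and a hypothetical process_row callback; nothing here is TeamDesk's code:

```python
import os

def import_with_progress(path, process_row):
    """Stream a large file line by line and report percent complete."""
    total = os.path.getsize(path)
    done = 0
    with open(path, "rb") as f:
        for line in f:
            process_row(line)       # whatever the importer does per record
            done += len(line)       # bytes consumed so far
            print(f"\rImported {done / total:.0%}", end="", flush=True)
    print()
```

Even this crude percentage would have told me the process was alive and roughly how long was left to wait.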
It was time to check out how I could work within the system. TeamDesk had no problems searching the data and building reports on a data volume of this size.
Testing my app more deeply, I noticed that the system does not split the data into pages. Instead, only the first 200 records are shown, and the rest are displayed by pressing the link below.
I see little use in such an option, even though the program showed me all 20,000 records without any problems when I pressed Show all.
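For comparison, a minimal sketch of the classic page-by-page alternative, assuming a plain list of records. The page size of 200 matches TeamDesk's initial cut-off; everything else is my own illustration:

```python
def paginate(records, page, page_size=200):
    """Return one page of records plus simple navigation info."""
    start = (page - 1) * page_size
    page_records = records[start:start + page_size]
    total_pages = (len(records) + page_size - 1) // page_size
    return page_records, page, total_pages

records = list(range(20_000))  # stand-in for the imported rows
page_records, page, total_pages = paginate(records, page=3)
print(f"Page {page} of {total_pages}: {len(page_records)} records")
# Page 3 of 100: 200 records
```

With this scheme a 20,000-record table becomes 100 pages of 200 records each, instead of one initial slice and a single "Show all" jump.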
My impression:
It seems the TeamDesk system handles large data volumes easily, though some functions still need tuning (a progress indicator, for example). And the implementation process was not that cumbersome.