File as temp data storage

Hi all,
If my application cannot establish a connection with the server, it stores 8000 records in files. After the connection with the server resumes, it sends those records to the server, which stores them in a database. I want to know how I should store the records in files. Thanks in advance.
With regards, Ajse

Eh? That's going to be application-specific.
Unless you're just asking how to write to files generally.
If so, look at the java.io package and related tutorials.
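One simple approach - this is only a rough sketch, not Ajse's actual design; the file name, record format and RecordSender interface below are made up for illustration - is to append each record as one line to a plain text file while the server is unreachable, then read the lines back, resend them, and delete the file once the connection returns:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.List;

public class OfflineBuffer {

    private final Path bufferFile = Paths.get("pending-records.txt");

    // Append one record; called whenever the server is unreachable.
    public void store(String record) throws IOException {
        byte[] line = (record + System.lineSeparator()).getBytes(StandardCharsets.UTF_8);
        Files.write(bufferFile, line, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Replay the buffered records once the connection is back, then clear the file.
    public void flush(RecordSender sender) throws IOException {
        if (!Files.exists(bufferFile)) {
            return;
        }
        List<String> records = Files.readAllLines(bufferFile, StandardCharsets.UTF_8);
        for (String record : records) {
            sender.send(record); // e.g. a socket write or web-service call
        }
        Files.delete(bufferFile);
    }

    // Stand-in for whatever transport the application already uses.
    public interface RecordSender {
        void send(String record) throws IOException;
    }
}

For 8000 small records a single append-only text file like this is usually enough; if the records are binary or delivery has to survive a crash mid-send, a serialized queue or an embedded database would be the next step.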

Similar Messages

  • WRT160NL changing file sizes on data storage

    I have a WRT160NL with Firmware Version 1.0.02. After 3 days of trying to connect my WD Elements 3 TB drive, I succeeded with two 1 TB partitions and one 700 GB partition that is not visible from the router. I'm OK with that. The problem now is when I copy my files. I'm trying to create a TV series library for my Apple TV. Everything works fine except the size of the files. When I copy files from the PC over USB, they end up corrupted on the network storage and are not accessible. When I copy a single 350 MB file to the network storage over Wi-Fi, the copy works, but it is saved with a size of 2.7 GB. In general I have a lot of problems with this router and I'm very disappointed.

    Fixed:
    It was NTFS. I upgraded the router to the latest firmware (1.0.03) today and had the same issue. I then changed the HDD to 2 partitions, formatted the disk as FAT32, and everything is OK for now. All 3 TB are accessible and the files copy and play back just fine. The network speed has also changed for the better: 5 Mb/second via Wi-Fi, where before I had 1.5 Mb/second. I had some problems with one of the access points after the firmware upgrade, but that was fixed later.
    Generally, if your storage is larger than 2 TB the router has some problems, but
    2 partitions, FAT32 and the latest firmware (1.0.03) will fix the problem.

  • File path of open data storage

    Hello all!
    Now I'm using the Open Data Storage, Write Data and Close Data Storage blocks for storing and extracting result data. Regarding the file path: previously I set the data path by double-clicking the "Open Data Storage" block and entering the file location in the indicated place, and that worked!
    Now that I have built a stand-alone application from this program and need to use it on other computers, the file location I entered in the Open Data Storage block is no longer valid on the other PCs. So I modified my source code by wiring a "Current VI Path" to the Open Data Storage block's file path terminal instead of entering it inside the block, and this doesn't work! During the run an error appears in the Write Data block saying that the storage refnum isn't valid!
    I'm wondering why I can't specify the file path like this. Is there any way to specify the file path as the current VI path?
    Thanks!
    Chao

    You need to account for the path changes when the VI is built into an application; have a look at this example.
    https://decibel.ni.com/content/docs/DOC-4212

  • Repl. Partitioning BSO to ASO: increase of size of .dat file in temp folder

    Hello,
    we are shifting data from a BSO cube to an ASO cube via replicated partitioning. The partitioning takes about 50 minutes to execute.
    Size of .dat in the metadata folder: 8 MB
    Size of .dat in the default folder: 150 MB
    Size of .dat in the temp folder: 38 GB
    Does anyone have an explanation for the enormous size of the .dat file in the temp folder?
    Many thanks in advance!
    Michael

    I am doing the same BSO-to-ASO replication. My ess00001.dat in default is 1.9 GB, in metadata it is 8.2 MB, the OTL file in <db> is 18 MB, and the outline has about 10,000 members (rough guess). Our partition replication script looks like this:
    login <user> identified by <password> on <server>;
    spool on to <logfile>;
    refresh replicated partition <srcBSO_App>.<srcBSO_DB> to <tgtASO_App>.<tgtASO_db> at <server> updated data;
    Exit;
    I have a second process running in a task scheduler that is continuously updating the aggregates in the ASO cube. Perhaps that is cleaning out my temp .dat. The MaxL command it calls is:
    execute aggregate selection on database <tgtASO_App>.<tgtASO_db> based on query_data;
    Please check out the post I put on the other thread about how we run MaxL from a calc script and other thoughts on "round tripping" Planning-ASO-Planning. Another trick: Retrieve speed is dramatically improved by disabling and working around the @XREFs.

  • Should We consider Temp Data Files While Estimating The Database Size

    Hi,
    The database size is the sum of the physical files:
    control files
    redo log files
    datafiles
    temp files
    So I want to know why we are considering the temp files, because they are temporary. At one stage of the database the temp size could be more, and at another stage it could be less.
    So why consider the temp files?
    Please share your views on it.
    Thanks
    Umesh

    "So, in essence the size of your datafiles is the size of your tablespaces?"
    No. The size of the tablespace is the sum of the sizes of the datafiles in the tablespace, i.e. the datafiles determine the tablespace size, not the other way round.
    (Although when you CREATE or ALTER TABLESPACE, you specify the sizes of the datafiles that you want to belong to the tablespace.)
    "the temporary tablespace has space allocated to it regardless of whether there are temporary tables in that tablespace or not."
    Two points here:
    1. On most OSs the temporary tablespace tempfile is created as a "sparse" file. So, if you issue CREATE TEMPORARY TABLESPACE TEMP TEMPFILE 'xyz.dbf' SIZE 1000M; and then do an "ls -l" at the OS level, 'xyz.dbf' appears to be only a few tens of KBs in size. The OS "grows" the file to 1000M as necessary.
    When talking to your OS administrator, ensure that you get 1000M (or the AUTOEXTEND MAXSIZE!) of space allocated even though he might "see" only a few tens of KBs used on the first day.
    2. The temporary tablespace does not hold objects (other than "global temporary tables" that overflow from memory to disk). It is really temporary space for joins, sorts, ORDER BYs etc.
    "So, your datafile size is not affected regardless of your temporary tables coming and going."
    Yes, your datafile sizes and tempfile sizes are independent. Yet, when sizing disk space for the database you must include the tempfile size. However, when reporting to IT management with a statement like "our database size is ...", you might want to break it up into components like data dictionary, tables, indexes, temporary space, redo logs and archive logs. You could also differentiate between OS-allocated space (sizes of datafiles), Oracle-allocated space (sizes of segments) and actual used space (which you'd have to compute!).
    Hemant K Chitale

  • Temp data file

    Hello,
    I read somewhere that if we delete the temp datafile in Oracle 10g, the database automatically creates a new temp file on restart. So I deleted that file from the oradata folder and restarted the database, but it is not creating a new temp file.
    Can anyone tell me why this is so?

    But for XE it works:
    SQL> select name from v$tempfile;
    NAME
    C:\ORACLEXE\ORADATA\XE\TEMP.DBF
    SQL> shutdown abort
    Instance ORACLE arrêtée.
    SQL> exit
    Déconnecté de Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    C:>del c:\oraclexe\oradata\xe\temp.dbf
    C:>dir c:\oraclexe\oradata\xe\temp.dbf
    Le volume dans le lecteur C s'appelle OS
    Le numéro de série du volume est D298-E605
    Répertoire de c:\oraclexe\oradata\xe
    Fichier introuvable
    C:>sqlplus / as sysdba
    SQL*Plus: Release 10.2.0.1.0 - Production on Jeu. Août 26 15:56:30 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connecté à une instance inactive.
    SQL> startup
    Instance ORACLE lancée.
    Total System Global Area  805306368 bytes
    Fixed Size                  1289996 bytes
    Variable Size             218104052 bytes
    Database Buffers          583008256 bytes
    Redo Buffers                2904064 bytes
    Base de données montée.
    Base de données ouverte.
    SQL> select name from v$tempfile;
    NAME
    C:\ORACLEXE\ORADATA\XE\TEMP.DBF
    SQL>
    And in the alert log at restart:
    Thu Aug 26 15:57:05 2010
    Re-creating tempfile C:\ORACLEXE\ORADATA\XE\TEMP.DBF

  • Temp data file corrupted.

    Hi.
    2-node RAC, DB version: 11.2.0.1
    One of the temp files in the TEMP tablespace is corrupted on node 1.
    So we want to drop the corrupted temp file and create a new one.
    Can someone please provide the steps to drop and create a temp file on RAC?
    Does it require any downtime?
    -Thanks

    Hi,
    - You will need to create a new temporary tablespace.
    - ALTER DATABASE DEFAULT TEMPORARY TABLESPACE <new temp tablespace>; - this makes sure all your users start using the new temp tablespace. You cannot drop a default temporary tablespace unless you assign another temporary tablespace.
    - Make sure that there are no other users using the old temp tablespace by querying v$sort_usage.
    - Once confirmed, you can drop the old tablespace and start using the new temp tablespace you created. If you want to keep using the old temp tablespace name, recreate it with the same name and repeat the procedure.
    Regards,
    Suntrupth

  • Download fails, PART file in temp folder

    I rebuilt my PC on the 8th of June and downloaded and installed the latest Firefox. Within a day or two the download window stopped appearing. I went into Tools, Options, Downloads to check that the window should be appearing; it should be. I noticed that there was nothing in the downloads box, and picking Browse failed to produce a folder selection dialogue. I then looked in the %temp% (Win XP) folder and found a .part file. Removing the .part extension allowed the file to be opened.
    Having googled the problem I looked at about:config and found that the browser.download.dir & browser.download.lastDir keys were missing. Recreating them didn't fix the problem.
    Any suggestion on what to try next would be appreciated.
    Firefox version: 3.6.3
    Cheers
    Ian
    == Possibly after installing Adobe 9

    I had this same problem, and followed instructions from here with success (mentioned above, thank you!)
    http://kb.mozillazine.org/Unable_to_save_or_download_files
    Manually delete download history:
    ''Firefox 3: Download history is stored in the downloads.sqlite file instead of downloads.rdf. [13] Because of improvements in Firefox 3 data storage [14] there should be little need to manually delete the downloads.sqlite file, except in cases of file corruption. [15] [16] If you do delete downloads.sqlite, you should also delete downloads.rdf, if it exists.''

  • How to export data as an XML file from an Oracle database?

    Could you please tell me the step-by-step procedure for the following question: how do I export data as an XML file from an Oracle database? Is it possible? Please tell me, it's an urgent requirement...
    Thanks in advance
    Bala

    SQL> SELECT * FROM v$version;
    BANNER
    Oracle DATABASE 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS FOR 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    5 rows selected.
    SQL> CREATE OR REPLACE directory utldata AS 'C:\temp';
    Directory created.
    SQL> declare                                                                                                               
      2    doc  DBMS_XMLDOM.DOMDocument;                                                                                       
      3    xdata  XMLTYPE;                                                                                                     
      4                                                                                                                        
      5    CURSOR xmlcur IS                                                                                                    
      6    SELECT xmlelement("Employee",XMLAttributes('http://www.w3.org/2001/XMLSchema' AS "xmlns:xsi",                       
      7                                  'http://www.oracle.com/Employee.xsd' AS "xsi:nonamespaceSchemaLocation")              
      8                              ,xmlelement("EmployeeNumber",e.empno)                                                     
      9                              ,xmlelement("EmployeeName",e.ename)                                                       
    10                              ,xmlelement("Department",xmlelement("DepartmentName",d.dname)                             
    11                                                      ,xmlelement("Location",d.loc)                                     
    12                                         )                                                                              
    13                   )                                                                                                    
    14     FROM   emp e                                                                                                       
    15     ,      dept d                                                                                                      
    16     WHERE  e.DEPTNO=d.DEPTNO;                                                                                          
    17                                                                                                                        
    18  begin                                                                                                                 
    19    OPEN xmlcur;                                                                                                        
    20    FETCH xmlcur INTO xdata;                                                                                            
    21    CLOSE xmlcur;                                                                                                       
    22    doc := DBMS_XMLDOM.NewDOMDocument(xdata);                                                                           
    23    DBMS_XMLDOM.WRITETOFILE(doc, 'UTLDATA/marco.xml');                                                                  
    24  end;                                                                                                                  
    25  /                                                                                                                      
    PL/SQL procedure successfully completed.
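    If you would rather run the export from the client side, roughly the same XML can be produced from Java with plain JDBC plus the JDK's DOM and Transformer classes. This is only a sketch of that alternative, not part of the SQL*Plus answer above: the connection URL, credentials and output file name are placeholders, and the Oracle JDBC driver (ojdbc jar) has to be on the classpath.

    import java.io.File;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class EmpToXml {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - adjust for your environment.
            String url = "jdbc:oracle:thin:@//dbhost:1521/orcl";

            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
            Element root = doc.createElement("Employees");
            doc.appendChild(root);

            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT e.empno, e.ename, d.dname, d.loc FROM emp e JOIN dept d ON e.deptno = d.deptno")) {
                while (rs.next()) {
                    // One <Employee> element per row, with the columns as attributes.
                    Element emp = doc.createElement("Employee");
                    emp.setAttribute("number", rs.getString("empno"));
                    emp.setAttribute("name", rs.getString("ename"));
                    emp.setAttribute("department", rs.getString("dname"));
                    emp.setAttribute("location", rs.getString("loc"));
                    root.appendChild(emp);
                }
            }

            // Serialize the DOM tree to emp.xml with indentation.
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.transform(new DOMSource(doc), new StreamResult(new File("emp.xml")));
        }
    }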

  • TREX - Configuring Distributed Slave with Decentralized Data Storage

    I am creating a distributed TREX environment with decentralized data storage with 3 hosts.  The environment is running TREX 7.10 Rev 14 on Windows 2003 x64.  These are the hosts:
    Server 01p: 1st Master NameServer, Master Index Server, Master Queue Server
    Server 02p: 2nd Master NameServer, Slave Index Server
    Server 03p: Slave NameServer, Slave Index Server (GOAL; Not there yet)
    The first and second hosts are properly set up, with the first host creating the index and replicating the snapshot to the slave index server for searching.  The third host is added to the landscape.  When I attempt to change the role of the third host to be a slave for the Master IS and run a check on the landscape, I receive the following errors:
    check...
    wsaphptd03p: file error on 'wsaphptd03p:e:\usr\sap\HPT\TRX00\_test_file_wsaphptd02p_: The system cannot find the file specified'
    wsaphptd02p: file error on 'wsaphptd02p:e:\usr\sap\HPT\TRX00\_test_file_wsaphptd03p_: The system cannot find the file specified'
    slaves: select 'Use Central Storage' for shared slaves on central storage or change base path to non shared location
    The installs were all performed in the same way, with storage on the "E:" drive, using a local install on the stand-alone installation as described in the TREX71InstallMultipleHosts and TREX71InstallSingleHosts guides provided.
    Does anybody know what I should try in order to resolve this issue and add the third host to my TREX distributed landscape? There really weren't any documents that gave more information beyond the install guides.
    Thanks for any help.

    A ticket was opened with SAP customer support.  The response to that ticket is below:
    Many thanks for the connection. We found out that the error message is wrong. It can be ignored if you press 'Shift' and the 'Deploy' button (TREX Admin tool -> Landscape Configuration). We will fix this error in the next revision (Revision 25) of TREX 7.1.

  • Uploading to online data storage: Now cannot download songs into iTunes

    Hello all.
    I am not seeing this one on any FAQs, knowledge bases, or discussion boards yet:
    I am doing my initial upload of files to SugarSync, a highly-recommended online data storage service. Since I started, I cannot download songs into iTunes, even from the iTunes Store.
    On any new downloads from the iTunes Store, the song DOES appear in the Music library view but immediately after completing the download it shows the exclamation point warning that the file is not in its location. The new folders get created properly within the iTunes Music folder but the subfolder where the song should be is empty. Each time I had to write iTunes Support to get them to make the files available.
    I can still add files manually, no problem. E.g., I can add *.mp3 or *.wav files or folders, I can convert them to AAC, etc. The glitch seems to occur only when automated loads into iTunes are operating.
    As a test, I downloaded a song from Amazon. Here the problem was different but I could work around it manually. Normally the Amazon process loads songs automatically into iTunes, too. Here again, the download did create the proper folder in the iTunes Music folder, as it should, but this time the symptoms were reversed:
    (a) the mp3 file WAS in its folder as it should be (w/the iTunes DL, the file was NOT there)
    (b) the song did NOT appear in the iTunes Music view (w/the iTunes DL, the song DID appear)
    (c) I was able to browse to the file and tell it manually to load into iTunes (w/iTunes I had to write Support and wait a day).
    (I wonder what's the cause of the differences between the two cases.)
    Strictly speaking, I can't PROVE the problem has anything to do with SugarSync (which otherwise seems good so far), but the DL problem started as soon as I started using it. Something in the SugarSync upload or file-monitoring process, or an odd thing in iTunes, seems to be preventing automated, direct loads into iTunes. And since the data service runs in the background so it can monitor file changes, that might mean I can't buy music anymore! Obviously that would be a dealbreaker with SS. (I have contacted SS on this but they've not had a fair chance to reply yet.)
    1) Anyone else have this problem?
    2) Is this permanent or just temporary while I am doing the initial upload?
    3) Anyone know a solution?
    (FYI, I am a highly-experienced user and otherwise quite handy with iTunes files, library moves, and backups. My library is entirely consolidated and all AAC.)
    Thanks.
    (Oh, and this occurred in both iTunes 8 and the new iTunes 9, so it seems unrelated to the upgrade this week.)

    UPDATE 1. CHANGING BROWSER HELPED -- OR DID IT?
    I called Apple iTunes Support, who said the problem is new to them. The technician's hypothesis was that something, perhaps browser-related, was interfering with the initial creation of a temporary file, so that the completed file was ending up in the Recycle Bin instead.
    He noted that iTunes, though not going through one's browser on screen, does use settings from one's default browser. I use Mozilla Firefox, so we switched to IE as the default browser, restarted iTunes, and the song downloaded with no problem! Then I switched back to Mozilla, restarted iTunes, and it worked AGAIN with no problem!
    (Dutifully I advised SugarSync, which is still investigating.)
    UPDATE 2: ARTIST NAME CHANGE - SOME FILES GOT MIS-MOVED / MIS-CHANGED
    Definitely something is still wrong. This time some pre-existing song entries (not new downloads) lost their connection to their source file.
    In iTunes, which manages folder names for artists and albums automatically, I corrected the spelling of an Artist, so iTunes immediately renamed the folder, and SugarSync automatically noted the change to be uploaded. While the renamed folder and all the songs within it were still uploading to SS, in iTunes I saw exclamation points come up -- but only for some of them. Most files got moved or changed correctly, but several lost the connection to their file (i.e., the file was removed from the original misnamed folder but never moved into the correctly-named folder). Weird.
    Worse, in only some of those cases did I find the missing *.m4a file in the Recycle bin. (I had to retrieve old, original *.mp3 versions from another folder and re-import each into iTunes manually.) I've never seen iTunes have a problem managing an Artist rename until I started using the live SS process.
    (I've reported this to SS and asked if there is a way to disable temporarily SS to see if that's the problem.)
    [Note: I am willing to try downloads again but I am wary of trying to rename entire Artists (Folder) again. That was a lot of work.]
    ====
    UPDATE 3: SERIES OF TESTS - 1 FAILURE USING iTUNES
    Still problem occurs, but not always. Today, I rebooted PC. I tried CD, iTunes, & Amazon. I varied having the browser open when using iTunes.
    Here are the results of a series of attempts to download songs. "FAIL" means the file did not load properly into iTunes or loaded but lost its connection (exclamation point warning).
    # Source Mozilla #Songs Result
    1 CD Closed 1 OK
    2 iTunes Closed 1 OK
    3 Amazon Open 2 OK
    4 iTunes Open 1 FAIL
    5 Amazon Open 1 OK
    6 iTunes Closed 1 OK
    7 Amazon Open 2 OK
    8 iTunes Open 2 OK
    (I reported this to SS. Hoping they'll test and find the problem.)

  • Data storage and read

    Hello all,
    I have a problem with data storage and reading. I used the combination of "Open Data Storage", "Write Data" and "Close Data Storage" to store some data as an array, as shown below.
    And I used the inverted combination to read the data back, as shown below:
    As shown, the data file is in TDM form. This works fine on my computer, both in the VI and in the stand-alone application. However, when I run the stand-alone application on a target PC, the storage file can't be generated!
    I don't know if it's a problem with my code or with the target PC, since the target PC has a tiny memory card with few drivers installed. I'm wondering if anybody could help me fix this problem. Is there another way to store and read the file, or can I make the target PC generate the TDM file?
    Thanks!
    Chao

    What error is being presented when the file isn't being generated?  Is it an error with permissions?  Or an error with a folder not existing?  Can you manually make a file in the location where you expect the file to be generated?

  • Trouble with Data Storage VI's

    I am new to LabVIEW data storage. I wrote a very basic program to write data to a database and to read the data back, but for some reason I am unable to read the data. I am attaching the VIs. Can anyone please tell me what I am doing wrong in them?
    Thank you,
    Mudda.
    Attachments:
    Read DataBase.vi 156 KB

    Mudda,
    I modified your code and I'll attach it here for you to look at. First of all, you need to tell the Data Storage Open function what operation to perform (e.g. open, create, or replace). Then, you need to make sure the file you're writing to is a .tdm file. Finally, you need to remove the "Signals" terminal from your read and write VIs. To do this, double-click the VI and uncheck the box for "Show terminals for data channel". If this is checked and the "Signals" terminal is visible, then the refnum will not pass any info on the file unless a signal is actually connected. So take a look at the code and see if you have any questions.
    Tyler S.
    Attachments:
    write.vi 108 KB

  • 2.23 Apps must follow the iOS Data Storage Guidelines or they will be rejected

    My Multi Issue v14 app (24124) was just rejected by Apple, apparently because storage of the data (folios?) was not iCloud compatible.
    Is this related to v14? Would building a v15 app resolve the issue?
    or is there some other problem?
    Please advise...
    Full text of Apple rejection below...
    Nov 4, 2011 08:17 PM. From Apple.
    2.23
    We found that your app does not follow the iOS Data Storage Guidelines, which is not in compliance with the App Store Review Guidelines.
    In particular, we found magazine downloads are not cached appropriately.
    The iOS Data Storage Guidelines specify:
    "1. Only documents and other data that is user-generated, or that cannot otherwise be recreated by your application, should be stored in the /Documents directory and will be automatically backed up by iCloud.
    2. Data that can be downloaded again or regenerated should be stored in the /Library/Caches directory. Examples of files you should put in the Caches directory include database cache files and downloadable content, such as that used by magazine, newspaper, and map applications.
    3. Data that is used only temporarily should be stored in the /tmp directory. Although these files are not backed up to iCloud, remember to delete those files when you are done with them so that they do not continue to consume space on the user’s device."
    For example, only content that the user creates using your app, e.g., documents, new files, edits, etc., may be stored in the /Documents directory - and backed up by iCloud. Other content that the user may use within the app cannot be stored in this directory; such content, e.g., preference files, database files, plists, etc., must be stored in the /Library/Caches directory.
    Temporary files used by your app should only be stored in the /tmp directory; please remember to delete the files stored in this location when the user exits the app.
    It would be appropriate to revise your app so that you store data as specified in the iOS Data Storage Guidelines.
    For discrete code-level questions, you may wish to consult with Apple Developer Technical Support. Please be sure to include any symbolicated crash logs, screenshots, or steps to reproduce the issues when you submit your request. For information on how to symbolicate and read a crash log, please see Tech Note TN2151 Understanding and Analyzing iPhone OS Application Crash Reports.
    To appeal this review, please submit a request to the App Review Board.

    You might want to check out our ANE (Adobe Native Extension) solution that enables your FB projects to abide by Apple's Data Storage guidelines.
    https://developer.apple.com/library/ios/#qa/qa1719/_index.html
    Do Not Backup project:
    http://www.jampot.ie/ane/ane-ios-data-storage-set-donotbackup-attribute-for-ios5-native-extension/
    David
    JamPot.ie

  • Non-server data storage

    A friend of mine is developing a database for a specific environment:
    A small number of peer-to-peer networked Windows computers in a small office, with no dedicated servers - each computer acting as an individual's workstation. They want a "central" database to store client information etc, with an interface they can all use at once to access and change the data.
    Given the nature of the environment, my first thought was that some sort of file-based data storage (not requiring a server process) would be most appropriate - Access, csv, XML....but I'm not intricately familiar with JDBC support for these mechanisms, so wasn't sure what to recommend specifically.
    They are not willing/able to spend any money on this solution, so it must use the current environment. Can someone recommend a data storage method and point me to an appropriate JDBC driver?
    Oh, and while I have your attention - anybody know of a good CSV parser? I'm currently splitting a line of data by commas, but it's also splitting strings with commas in them...
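    On the CSV aside: the usual fix is to walk the line one character at a time and only treat a comma as a field separator while you are outside double quotes. A minimal sketch (the class and method names are just illustrative, and it ignores newlines embedded inside quoted fields):

    import java.util.ArrayList;
    import java.util.List;

    public class CsvLine {
        public static List<String> split(String line) {
            List<String> fields = new ArrayList<String>();
            StringBuilder current = new StringBuilder();
            boolean inQuotes = false;
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (c == '"') {
                    if (inQuotes && i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        current.append('"'); // doubled quote inside a quoted field
                        i++;
                    } else {
                        inQuotes = !inQuotes; // start or end of a quoted field
                    }
                } else if (c == ',' && !inQuotes) {
                    fields.add(current.toString()); // field boundary
                    current.setLength(0);
                } else {
                    current.append(c);
                }
            }
            fields.add(current.toString()); // last field
            return fields;
        }
    }

    With that, split("a,\"hello, world\",c") yields three fields: a, "hello, world" and c, instead of four.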

    A friend of mine is developing a database for a specific environment: a small number of peer-to-peer networked Windows computers in a small office, with no dedicated servers - each computer acting as an individual's workstation. They want a "central" database to store client information etc, with an interface they can all use at once to access and change the data.
    Based on this I have a non-Java solution to suggest:
    use MS Access and IIS to develop and deploy an intranet.
    There are several pros to this solution that I see:
    - You can develop and deploy an intranet using web browsers as clients very quickly and relatively cheaply.
    - An intranet (a series of web pages) may well be easier to make changes to than a full-blown application.
    - Having the data in an RDBMS (Access), which is cost-effective in this case, also makes it relatively simple to upgrade or port the system later.
    Now, I like programming in Java as much as the next person, but from your requirements it sounds like writing an application might be overkill. In my experience an intranet like this is a pretty good solution: you don't have to install anything on the clients, and you already have the software for the server (if you don't have IIS, most versions of Windows now have PWS, or Personal Web Server, which will work for this). The important thing is to have a good database design so that you can make changes or port the client easily later if you need to.
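    If the file-based JDBC route from the original post is still preferred, one option the thread does not mention - purely a sketch here, with the H2 database, URL, table and sample row chosen for illustration - is an embedded pure-Java database such as H2 or HSQLDB: the data lives in ordinary files, there is no separate server install, and it is reached through a normal JDBC driver (just the h2.jar on the classpath).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ClientStore {
        public static void main(String[] args) throws Exception {
            // The database is kept as ordinary files under ./data on the shared machine.
            String url = "jdbc:h2:./data/clients";
            try (Connection con = DriverManager.getConnection(url, "sa", "");
                 Statement st = con.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS client (id INT PRIMARY KEY, name VARCHAR(100))");
                st.execute("MERGE INTO client KEY(id) VALUES (1, 'Acme Ltd')"); // insert-or-update
                try (ResultSet rs = st.executeQuery("SELECT id, name FROM client ORDER BY id")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }

    One caveat: a database file opened by one embedded process is not automatically usable by the other workstations at the same time; H2's AUTO_SERVER=TRUE URL option (or a small server mode) is the usual workaround, which needs weighing against the "no server process" requirement above.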
