What is the "catch" in writing Variants to a Binary file?

I'm saving "event" data associated with an experiment that LabVIEW is "controlling".  The data includes such things as the State of several State Machines, Messages to Queued Message Handlers, digital data from hardware (such as Button Pushes), etc.  My "Event Type" is a cluster with three components -- Event Time (an I32 representing ticks of a 1KHz clock), Event ID (an Enum with values such as State, Message, Digital In), and Event Data, a Variant that is the State Enum or Message String or Digital U8 port value.
I wrote a test routine in which I generated some known data of Event Type, wrote it to disk using Binary Writes, then read it back in to an Array of Events doing a "Read to End-of-File" Binary Read.
Somewhat to my surprise, the array of Events looked exactly correct, including the Variant data.
What is the "catch" in storing data this way?  I can see one obvious problem -- if I change the Event ID Enum, I could mis-identify the Events.  Are there things I need to worry about in handling the Variant data?
[I'm now thinking about hare-brained schemes to save the Event ID data in the file itself, perhaps as a header of some sort.  And before you ask, I'm not quite ready to think about using TDMS ...]
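For concreteness, here is a rough sketch, in Java rather than G, of the tagged-record layout a flattened variant amounts to. Everything here (names, tag values) is hypothetical and just illustrates the point: the record round-trips fine as long as reader and writer agree on the tag table, and silently mis-decodes when they don't.

import java.io.*;

// Hypothetical sketch of a tagged event record: time, type tag, payload.
// A flattened LabVIEW variant is conceptually similar (type info + data).
public class EventRecordSketch {
    static final byte STATE = 0, MESSAGE = 1, DIGITAL_IN = 2; // tag table

    static void writeEvent(DataOutputStream out, int timeTicks, byte id, byte[] payload)
            throws IOException {
        out.writeInt(timeTicks);       // Event Time (ticks of the 1 kHz clock)
        out.writeByte(id);             // Event ID tag
        out.writeInt(payload.length);  // payload length, so a reader can skip it
        out.write(payload);            // Event Data (state enum / message / port value)
    }

    static void readEvent(DataInputStream in) throws IOException {
        int timeTicks = in.readInt();
        byte id = in.readByte();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        // Decoding the payload requires the same tag -> type mapping the writer
        // used; edit the tag table and old files silently decode to the wrong thing.
        System.out.printf("t=%d id=%d (%d bytes)%n", timeTicks, id, payload.length);
    }
}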

crossrulz wrote:
<snip>
The enum I will have to play with...  
<mini-snip>
Deleting and/or adding in the middle could mess things up.
Oh, yeah, I got a little burned with that one.  Good thing it was just something I was practicing on.  Because changing stuff in the middle actually changes the values that the text is associated with, it can throw off all the selections throughout your project!  I was dismayed to find all the typedef'd enum constants were changed to reflect the new text associated with the values.  For instance, if the value for "Init" was "1" and "Exit" was "2" and I added "Test" in between, "Test" would become "2", "Exit" would move to "3", and all the constants that had "Exit" selected before now showed "Test" as the selection!
So if you have to add something, add it to the end, no matter how unintuitive it makes the enum.  If you decide to delete something from the middle, don't delete it; replace it with something like <not used> to keep the values aligned with the text.  It may look funky on the enum, but it will save you lots of heartache.
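The hazard is easy to demonstrate outside LabVIEW, too. A minimal Java sketch (hypothetical enum, purely illustrative): persisting an enum's numeric value breaks when constants are inserted in the middle, while persisting its name does not.

// v1 of the enum: INIT = 0, EXIT = 1
enum EventId { INIT, EXIT }
// v2 after inserting a constant in the middle: INIT = 0, TEST = 1, EXIT = 2

public class OrdinalHazard {
    public static void main(String[] args) {
        int stored = EventId.EXIT.ordinal();  // written under v1: stores 1
        // Recompile with v2 and the same stored value decodes differently:
        System.out.println(EventId.values()[stored]); // EXIT under v1, TEST under v2
        // Storing the *name* instead survives reordering and insertions:
        System.out.println(EventId.valueOf("EXIT"));
    }
}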
Is there a better way to do this?
Bill
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.

Similar Messages

  • What is the advantage of writing XML mapping for servlets in Tomcat 4.1

    What is the advantage of writing XML mappings for servlets in Tomcat 4.1? Why don't we write them in Tomcat 3.1?

    Tomcat 3.1 doesn't support it because it was not a common practice then. Tomcat 4.1 deployments were made J2EE compliant. Mapping in XML has the advantage of having a proper DTD for validation, which would be known to all. It serves as a common standard irrespective of the server you use.
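    For reference, a minimal web.xml mapping of the kind Tomcat 4.x validates against the servlet DTD looks like this (servlet and class names are made up):
    <web-app>
      <servlet>
        <servlet-name>hello</servlet-name>
        <servlet-class>com.example.HelloServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>hello</servlet-name>
        <url-pattern>/hello</url-pattern>
      </servlet-mapping>
    </web-app>
    Because the structure is standardized, any compliant container (not just Tomcat) can deploy the same mapping.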

  • Student Discount: What's the catch?

    What is the catch with the incredible price difference between the student software (found on such sites as
    http://www.journeyed.com/itemDetail.asp?ItmNo=44001031) and the full-price software?  For instance, the link I referenced is for Adobe Creative Suite 4: Design Premium at only $399.98, a $1,399 difference from the full price found on the Adobe website.  I realize you have to be a student, but is that really all?  Is there some sort of licensing restriction, like you can’t use the program for commercial use if you are using the student version?  I would just like to know about any difference between them besides the obvious price difference.

    Adobe sure doesn't make it easy as there are at least 3 different versions that relate to education discounts - all with different licenses and prices.
    Take the Design Premium suite, for example: exactly the same software, just different prices and probably different restrictions.
    First, is the Education version at $599
    Then, there's a Student Edition at $399
    Finally, there's a Student Licensing version for $299
    The Student Licensing version is definitely not for commercial use (I think), and must be purchased through a campus store (not all schools are eligible).
    I believe the Education version can be used for commercial purposes (at least it used to be), and possibly even the Student Edition but I'm not sure about that. I looked fairly hard and couldn't find anything that explicitly states no commercial use for any of the versions. Your best bet would be to call Adobe.
    How would they know if you illegally used a version for commercial use? I doubt they would ... but don't overlook that karma thing.
    -phil

  • What is the Cisco IronPort C680 and M680 configuration backup file size?

    What is the Cisco IronPort C680 and M680 configuration backup file size?

    Size of the XML itself?  That is going to vary based on what you have configured, total lines of code, and # of appliances you may/may not have in cluster.
    M680, based on SMA as stand-alone, should be similar --- you are probably looking @ < 1 MB... 
    Looking @ my test environment, in which I have a nightly cron job set to grab a backup of...
    -rw-rw----  1 robert robert 161115 Sep 26 02:00 C000V-564D1A718795ACFEXXXX-YYYYBAD60A5A-20140926T020002.xml
    So, 161115 bytes ≈ 0.15 MB
    -Robert

  • What are the settings for datasource and infopackage for flat file loading

    Hi,
    I'm trying to load data from a flat file to a DSO. Can anyone tell me what the settings for the datasource and infopackage for flat file loading are?
    Please let me know.
    Regards,
    Kumar

    Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system.
    Uploading of Transaction data
    Log on to your SAP
    Transaction code RSA1 (leads you to Modelling)
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( Transaction data )
    • In general tab give short, medium, and long description.
    • In the extraction tab specify the file path, header rows to be ignored, data format (CSV) and data separator ( , ); a sample file is shown after these steps
    • In proposal tab load example data and verify it.
    • In the field tab you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, as the server will map them automatically. If you do not map them in this field tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
    • Create an info package by right clicking the data source, and in the schedule tab click Start to load data to the PSA. ( make sure to close the flat file during loading )
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to create ODS( Data store object ) or Cube.
    • Specify a name for the ODS or cube and click create
    • From the template window select the required characteristics and key figures and drag and drop it into the DATA FIELD and KEY FIELDS
    • Click Activate.
    • Right click on ODS or Cube and select create transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored
    • Activate created transformation
    • Create a Data Transfer Process (DTP) by right clicking the data target
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.
    4. Monitor
    Right-click the data target, select Manage, and in the Contents tab select Contents to view the loaded data. An ODS has two tables, a new table and an active table; to move data from the new table to the active table you have to activate it after selecting the loaded data. Alternatively, the monitor icon can be used.
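    As a concrete illustration (hypothetical data), a transaction-data flat file matching the extraction settings above would have one header row to be ignored, CSV format, and a comma separator:
    CUSTOMER,CALMONTH,AMOUNT
    C100,201001,2500.00
    C200,201001,1200.50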
    Loading of master data in BI 7.0:
    For Uploading of master data in BI 7.0
    Log on to your SAP
    Transaction code RSA1 (leads you to Modelling)
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( master data attributes, text, hierarchies)
    • In general tab give short, medium, and long description.
    • In the extraction tab specify the file path, header rows to be ignored, data format (CSV) and data separator ( , )
    • In proposal tab load example data and verify it.
    • In the field tab you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, as the server will map them automatically. If you do not map them in this field tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
    • Create an info package by right clicking the data source, and in the schedule tab click Start to load data to the PSA. ( make sure to close the flat file during loading )
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to select Insert Characteristics as info provider
    • Select required info object ( Ex : Employee ID)
    • Under that info object select attributes
    • Right click on attributes and select create transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.

  • What is the best way to export a .csv/.txt file to a MS Windows shared folder via a scheduled job?

    What is the best way to export a .csv/.txt file to a MS Windows shared folder, when the export process is a scheduled job?

    You can't directly, as scheduled jobs can only export to server-side data stores, which can only write to the landing area. You'll need to use an external script (e.g. a batch file) to move the file from the landing area after export. You can use an external task in the scheduled job to invoke the script; see the sketch after this reply.
    regards,
    Nick
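    If you would rather not maintain a batch file, any small external program works for the move step. A minimal Java sketch (all paths hypothetical) that the scheduled job's external task could invoke:
    import java.nio.file.*;

    // Move every .csv/.txt export from the landing area to a Windows share.
    public class MoveExports {
        public static void main(String[] args) throws Exception {
            Path landing = Paths.get("C:/dme/export/landingarea"); // hypothetical
            Path share   = Paths.get("//fileserver/exports");      // UNC path to the share
            try (DirectoryStream<Path> files =
                     Files.newDirectoryStream(landing, "*.{csv,txt}")) {
                for (Path f : files) {
                    Files.move(f, share.resolve(f.getFileName()),
                               StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }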

  • What is the smallest data structure record in a .TXT file record to be recognized as an Apple Address Book "Data Card"?

    Hello! This is my first time in this discussion group. The question posed is the subject line itself:
    What is the smallest data structure record in a .TXT file record to be recognized as an Apple Address Book "Data Card"?
    I'm lazy! As a math instructor with 40+ students per class per semester (pCpS), I would rather not have to create 40 data cards pCpS by hand, only to expunge that info at semester's end. My college's IS department can easily supply me with First name, Last name, and eMail address info, along with a myriad of other fields. I can manipulate those data on my end to create the necessary .TXT file, but I don't know the essential structure of that file.
    Can you help me?
    Thank you in advance.
    Bill

    Hello Bill, & welcome aboard!
    No idea what  pCpS is, sorry.
    To import a text file into Address Book, it needs to be a comma delimited .csv file, like...
    Customer Name,Company,Address1,Address2,City,State,Zip
    Customer 1,Company 1,2233 W Seventh Street,Unit 543,Seattle,WA,99099
    Customer 2,Company 2,1 Park Avenue,,New York,NY,10001
    Customer 3,Company 3,65 Loma Linda Parkway,,San Jose,CA,94321
    Customer 4,Company 4,89988 E 23rd Street,B720,Oakland,CA,99899
    Customer 5,Company 5,432 1st Avenue,,Seattle,WA,99876
    Customer 6,Company 6,76765 NE 92nd Street,,Seattle,WA,98009
    Customer 7,Company 7,8976 Poplar Street,,Coupeville,WA,98976
    Customer 8,Company 8,7677 4th Ave North,,Seattle,WA ,89876
    Customer 9,Company 9,4556 Fauntleroy Avenue,,West Seattle,WA,98987
    Customer 10,Company 10,4 Bell Street,,Cincinnati,OH,89987
    Customer 11,Company 11,4001 Beacon Ave North,,Seattle,WA,90887
    Customer 12,Company 12,63 Dehli Street,,Noida,India,898877-8879
    Customer 13,Company 13,63 Dehli Street,,Noida,India,898877-8879
    Customer 14,Company 14,63 Dehli Street,,Noida,India,898877-8879
    Customer 15,Company 15,4847 Spirit Lake Drive,,Bellevue,WA,98006
    Customer 16,Company 16,444 Clark Avenue,,West Seattle,WA,88989
    Customer 17,Company 17,6601 E Stallion,,Scottsdale,AZ,85254
    Customer 18,Company 18,801 N 34th Street,,Seattle,WA,98103
    Customer 19,Company 19,15925 SE 92nd,,Newcastle,WA,99898
    Customer 20,Company 20,3335 NW 220th,2nd Floor,Edmonds,WA,99890
    Customer 21,Company 21,444 E Greenway,,Scottsdale,AZ,85654
    Customer 22,Company 22,4 Railroad Drive,,Moclips,WA,98988
    Customer 23,Company 23,89887 E 64th,,Scottsdale,AZ,87877
    Customer 24,Company 24,15620 SE 43rd Street,,Bellevue,WA,98006
    Customer 25,Company 25,123 Smalltown,,Redmond,WA,98998
    Try Address Book Importer...
    http://www.sillybit.com/abee/
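    And since your IS department can hand you the fields, a few lines of code can emit that minimal file for you. A small Java sketch (field names and data made up):
    import java.io.PrintWriter;
    import java.util.List;

    // Emit the minimal comma-delimited file Address Book can import:
    // one header row naming the fields, then one row per student.
    public class RosterToCsv {
        record Student(String first, String last, String email) {}

        public static void main(String[] args) throws Exception {
            List<Student> roster = List.of(
                new Student("Ada", "Lovelace", "ada@example.edu"),
                new Student("Alan", "Turing", "alan@example.edu"));
            try (PrintWriter out = new PrintWriter("roster.csv")) {
                out.println("First Name,Last Name,Email");
                for (Student s : roster) {
                    out.printf("%s,%s,%s%n", s.first(), s.last(), s.email());
                }
            }
        }
    }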

  • What is the system doing when I empty a large file from the trash?

    I am in the process of deleting my old iPhoto library. When I installed OS X 10.10.3 it created a new Photos Library.photoslibrary and left the old iPhotos library as an archive. The old library is 249 gigabytes. I don't have "Empty Trash securely" on, and yet it is taking hours for the Finder to empty the trash with the 249 gigabyte iPhotos library in it. What is odd is that for a time, the amount of free space on my MacBook was increasing. It went from 77 GB to 88 GB, and then it started dropping all the way to 70 GB free. Now it is back up to 83.61 GB free.
    What is the Finder doing when it deletes a huge file? I realize that the iPhotos Library is not a single file, but contains hundreds of folders and thousands of files. Still I find it odd that it is taking it so many hours to delete, and that the amount of free space on my Mac goes up and down and up. Can someone explain this?
    It has been deleting for 4 hours and based on the "Emptying the Trash …" progress bar, it should take another two to three hours.

    Yes, but why would the space available go up and down and up as the files are deleted? If the Finder is deleting the files, shouldn't the free space on the drive be steadily increasing as the Trash is emptied?

  • What is the difference between httpd.pid and httpd.lock files?

    What is the difference between httpd.pid and httpd.lock files?

    Hi;
    Apache httpd saves the process id of the parent httpd process to the file logs/httpd.pid .
    LockFile
    Sets the path to the lockfile used when Oracle HTTP Server is compiled with either USE_FCNTL_SERIALIZED_ACCEPT or USE_FLOCK_SERIALIZED_ACCEPT. It is recommended that the default value be used. The main reason for changing it is if the logs directory is NFS mounted, since the lockfile must be stored on a local disk.
    For example: LockFile /oracle/Apache/Apache/logs/httpd.lock
    Please see:
    http://download.oracle.com/docs/cd/B14099_19/web.1012/b14007/fileloc.htm#sthref254
    Regards,
    Helios

  • What's the Catch? Ads for Free MacBook or ipad

    On a game on Facebook, there are always side ads inviting you to 'test' out a MacBook and now an ipad which you get to keep if you participate? Does anyone know what the catch is????

    Generally with such things, the user must participate in several (dozens or more) surveys or other activities. Some of these activities require a credit card for trying out an item.
    In the end, actually obtaining the free item costs quite a bit of time & money.
    ~Lyssa

  • Reading the Blob and writing it to an external file in an xml tree format

    Hi,
    We have a table named CLARITY_RESPONSE_LOG whose RESPONSE_FILE column is a BLOB holding XML content. The table may have more than 5 records, and we need to read each BLOB and write its content to an external file.
    CREATE TABLE CLARITY_RESPONSE_LOG
    (
      REQUEST_CODE   NUMBER,
      RESPONSE_FILE  BLOB,
      DATE_CRATED    DATE                           NOT NULL,
      CREATED_BY     NUMBER                         NOT NULL,
      UPDATED_BY     NUMBER                         DEFAULT 1,
      DATE_UPDATED   VARCHAR2(20 BYTE)              DEFAULT SYSDATE
    )
    The XML content in the insert statements is very small because the real content cannot be made public; in reality a very big XML file is stored in the RESPONSE_FILE BLOB column.
    Insert into CLARITY_RESPONSE_LOG
       (REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
    Values
       (5, utl_raw.cast_to_raw('<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</phone-number></xml-response>'), TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
    Insert into CLARITY_RESPONSE_LOG
       (REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
    Values
       (6, utl_raw.cast_to_raw('<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</phone-number></xml-response>'), TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
    Insert into CLARITY_RESPONSE_LOG
       (REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
    Values
       (7, utl_raw.cast_to_raw('<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</phone-number></xml-response>'), TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
    Insert into CLARITY_RESPONSE_LOG
       (REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
    Values
       (8, utl_raw.cast_to_raw('<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</phone-number></xml-response>'), TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
    Insert into CLARITY_RESPONSE_LOG
       (REQUEST_CODE, RESPONSE_FILE, DATE_CRATED, CREATED_BY, UPDATED_BY, DATE_UPDATED)
    Values
       (9, utl_raw.cast_to_raw('<?xml version="1.0" encoding="UTF-8"?><xml-response><phone-number>1212121212</phone-number></xml-response>'), TO_DATE('09/23/2010 09:01:34', 'MM/DD/YYYY HH24:MI:SS'), 1, 1, '23-SEP-10');
    The corresponding proc for reading the data and writing it to an external file goes something like this:
    SET serveroutput ON
    DECLARE
       vstart     NUMBER             := 1;
       bytelen    NUMBER             := 32000;
       len        NUMBER;
       my_vr      RAW (32000);
       x          NUMBER;
       l_output   UTL_FILE.FILE_TYPE;
    BEGIN
    -- define output directory
       l_output :=
          UTL_FILE.FOPEN ('CWFSTORE_RESPONCE_XML', 'extract500.txt', 'wb', 32760);
       vstart := 1;
       bytelen := 32000;
    ---get the Blob locator
       FOR rec IN (SELECT response_file vblob
                     FROM clarity_response_log
                    WHERE TRUNC (date_crated) = TRUNC (SYSDATE - 1))
       LOOP
    --get length of the blob
    len := DBMS_LOB.getlength (rec.vblob);
          DBMS_OUTPUT.PUT_LINE (len);
          x := len;
    ---- If small enough for a single write
    IF len < 32760
          THEN
             UTL_FILE.put_raw (l_output, rec.vblob);
             UTL_FILE.FFLUSH (l_output);
          ELSE  
    -------- write in pieces
             vstart := 1;
             WHILE vstart < len AND bytelen > 0
             LOOP
                DBMS_LOB.READ (rec.vblob, bytelen, vstart, my_vr);
                UTL_FILE.put_raw (l_output, my_vr);
                UTL_FILE.FFLUSH (l_output);
    ---------------- set the start position for the next cut
                vstart := vstart + bytelen;
    ---------- set the end position if less than 32000 bytes
                x := x - bytelen;
                IF x < 32000
                THEN
                   bytelen := x;
                END IF;
                UTL_FILE.NEW_LINE (l_output);
             END LOOP;
    ----------------- --- UTL_FILE.NEW_LINE(l_output);
          END IF;
       END LOOP;
       UTL_FILE.FCLOSE (l_output);
    END;
    The above code works and all the records' XML content is written adjacent to each other, but each record must start on a new line, or there must be a blank line between any two records.
    The output I currently get is as follows; all the XML data comes out on a single line:
    <?xml version="1.0" encoding="ISO-8859-1"?><emp><empno>7369</empno><ename>James</ename><job>Manager</job><salary>1000</salary></emp><?xml version="1.0" encoding="ISO-8859-1"?><emp><empno>7370</empno><ename>charles</ename><job>President</job><salary>500</salary></emp>
    But the content written to the external file has to be something like this:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <emp>
      <empno>7369</empno>
      <ename>James</ename>
      <job>Manager</job>
      <salary>1000</salary>
    </emp>
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <emp>
    <empno>7370</empno>
    <ename>charles</ename>
    <job>President</job>
    <salary>500</salary>
    </emp>
    Please advise.

    What was wrong with the previous answers given on your other thread:
    Export Blob data to text file(-29285-ORA-29285: file write error)
    If there's a continuing issue, stay with the same thread; don't just ask the same question again and again. It really pi**es people off and causes confusion, as not everyone will be familiar with what answers you've already had. You're just wasting people's time by doing that.
    As already mentioned before, convert your BLOB to a CLOB and then to XMLTYPE where it can be treated as XML and written out to file in a variety of ways including the way I showed you on the other thread.
    You really seem to be struggling to get the worst possible way to work.
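    As a sketch of that "treat it as XML, not bytes" idea from outside the database (this is a hypothetical JDBC client, not the XMLTYPE route shown on the other thread; connection details are made up and the Oracle JDBC driver is assumed to be on the classpath):
    import java.io.InputStream;
    import java.io.Writer;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.*;
    import javax.xml.transform.*;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // Read each BLOB, parse it as XML, and pretty-print it into one file,
    // with a blank line between records.
    public class BlobXmlExport {
        public static void main(String[] args) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/svc", "user", "pw");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT response_file FROM clarity_response_log"
                     + " WHERE TRUNC(date_crated) = TRUNC(SYSDATE - 1)");
                 Writer out = Files.newBufferedWriter(Paths.get("extract500.txt"))) {
                while (rs.next()) {
                    try (InputStream xml = rs.getBinaryStream(1)) {
                        t.transform(new StreamSource(xml), new StreamResult(out));
                        out.write(System.lineSeparator());  // end the record's last line
                        out.write(System.lineSeparator());  // blank line between records
                    }
                }
            }
        }
    }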

  • What is the best way to store data to a file? Serialization?

    FYI: I am somewhat of a novice programmer. I have almost finished my degree but everything I know about Java is self-taught (never taken a course in Java). Anyway, here is my question.
    So I have an example program that I have created. It basically just has three text panes which also have the ability to contain images. Now I would like to learn how to store data to a file for later retrieval, and decided to use a program such as this as an example. I am not sure if I should use the serialization API that Java has or some other means such as XML. If I used XML I was not sure how to store the image data that would be contained in the text panes. I figured serialization would take care of that if I just serialized my main class and then restored it. So that is what I tried first. Is this a good direction to go?
    The problem I am having is that I can't seem to get the serialization to work the way I need it to. I am not sure what I am doing wrong because I have read many tutorials and looked at a lot of example code but still don't understand. The tutorial I looked at most is [this one at IBM.|http://java.sun.com/developer/technicalArticles/Programming/serialization/]
    The eclipse project folder for my example program can be found here:
    [http://startoftext.com/stuff/myMenuExp/]
    zipped version:
    [http://startoftext.com/stuff/myMenuExp.zip]
    The main class is mainwindow.java. I know the source is kinda dirty. Any comments are welcome but I am most interested in how to solve this serialization problem. Thanks
    -James

    DrClap wrote:
    What will the nature of the data be? Just a handful of strings? A bunch of objects of different types reflecting the current state of your program to great depth and complexity? Something else?
    The data will be what is contained in the three text panes, each pane containing rich text and images. For an example, take a look at the example program in my first post.
    How will the data be used? Just write it out when the app shuts down, and read it all back in when it starts up? Do you need to query or search for specific subsets of the data? Is there any value in the stored form of the data being human-readable?
    Basically the data will need to be saved when the application shuts down or when the user selects Save from the File menu. The data will be restored when the user opens the file from the File menu. It would be nice if the stored data were human-readable, but it's not of primary importance.
    How often will the data be written and read? How many reads/writes/bytes/etc. per second/minute?
    Not often. Just a simple open and save from the File menu.
    How large could the data conceivably get?
    It will probably be a few paragraphs of formatted text and fewer than a dozen images per text pane.
    Will reading and writing of the data need to occur concurrently?
    No.
    Do you need to add new data to the storage as your app goes along, or just replace what's there with the most current state?
    Replace with the most current state of the data.
    If it's a simple read once at startup, write once at shutdown, overwriting the previous data, read only by your app, not by humans, then serialization via ObjectInput/OutputStream is probably the way to go. If there are other requirements, you may want to go with XML serialization, a DB, or some other solution.
    Thanks for the information. Serialization sounds like the way to go. Even if I end up using XML serialization in the end, it would still be of interest to me to learn how to use serialization without XML.
    I was trying to use serialization from the beginning but I can't seem to get it to work. Would you be willing to take a look at my example program? I attempted to implement serialization but it does not seem to work. Could you take a look and see what I am doing wrong? At this point I am stuck, as I can't seem to get serialization to work.
    I am going to go ahead and mark this thread as answered because I think I already got most of the information I was looking for, except what I am doing wrong with my attempt at serialization.
    Thank you jverd for your time.
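    For what it's worth, the mechanics jverd describes are small. A minimal sketch (PaneState is a hypothetical class; the usual trick is to serialize plain data extracted from the panes rather than the main window itself, whose object graph tends to drag in non-serializable references, which is the most common way attempts like this fail):
    import java.io.*;

    // Hypothetical holder for what one text pane contains.
    class PaneState implements Serializable {
        private static final long serialVersionUID = 1L;
        String richText;   // e.g. the pane's styled text, exported as RTF
        byte[][] images;   // raw bytes of each embedded image
    }

    public class SaveLoadDemo {
        static void save(PaneState s, File f) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
                out.writeObject(s);
            }
        }

        static PaneState load(File f) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
                return (PaneState) in.readObject();
            }
        }
    }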

  • What's the best approach to work with Excel, csv files

    Hi gurus. I've got a question for you. In your experience, what's the best approach to working with Excel or CSV files that have to be uploaded through DataServices to your data warehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so they can generate a set of reports from their data warehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front end to upload the Excel files directly to DataServices. The end user needs to keep track of which person uploaded the files for a given month.
    1. The end user should place their 4 excel files in a shared directory that will be seen by DataServices.
    2. DataServices will execute a certain scheduled job that will read the four files and upload them to the data warehouse at a determined time, let's say at 9:00pm.
    It makes me wonder... what happens if the user needs to present their reports immediately and can't wait until 9:00pm? Is it possible for the end user to execute some kind of action (outside the DataServices environment) so DataServices "could know" that it has to process those files right now, instead of waiting for the night schedule?
    Is there a way that DS will track who was the person who uploaded those files?
    Would it be better to build a front end for the end user so they can upload their four files directly to the data warehouse?
    Waiting for your comments to resolve this dilemma
    Best Regards
    Erika

    Hi,
    There are functions in DS that capture input files automatically. You could use the file_exists() or wait_for_file() function to do that. Schedule the job to run every few minutes, and if the file exists, run it. This could be done by using a file name with a date and timestamp, etc., or by moving the old files to an archive after each run so DS waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun

  • What is the best way to upload and download psd files from Photoshop to Photoshop Touch?

    What is the best way to upload and download PSD files from desktop Photoshop to Photoshop Touch, for on-the-go touch-ups? I'm using PSD files at 90 percent, 300 dpi, in Photoshop on the desktop and want to take them to PS Touch.

    Hi Bford225,
    I'd recommend using your favorite web browser and going to https://creative.adobe.com/files for uploading from the computer.
    Keep in mind tablet capabilities are much less than a computer, so large files might take a long time to download and be very system intensive to work on. Although you can import files up to 12 megapixels I'd recommend something more mid range, like 6 or 7 megapixels, ie 2880 x 2160 or 3072 x 2304.
    Also, PSD files are flattened when imported into Photoshop Touch.
    Hope that helps,
    -Dave 

  • What is the best method of backing up my digital files (catalog) in the Photoshop Elements Organizer

    What is the best method or service for backing up my digital files (catalog) in the Organizer from Photoshop Elements 12? Since the automatic Elements sync is no longer available, I do not know what to choose. I have tried to back this up using my external drive, but I cannot find the digital images per se. I see the entire program but not the catalog of pictures. Also, I have a Windows operating system, and Adobe Revel offers no edit capabilities with this OS.

    I'm in a similar situation including movies I've purchased from iTunes...
    Here's my setup:
    I have all my iTunes data (music, movies, etc.) as well as about 10 GB of photos stored on a network storage device that uses RAID-5 with SATA disks. This is basically a little toaster-sized NAS machine with four hard drives in it (www.infrant.com). If one of these drives dies, I get alerted and can insert a new one without missing a beat, since the data is stored redundantly across all drives. I can also just yank out any one of the four drives while the thing is running and my data is still safe (I tried this after I bought it, as a test).
    That's how I prevent problems with a single disk... Redundancy
    Now onto "backups"...
    Nightly, my little RAID toaster automatically backs itself up to an attached 250GB USB drive. However, these backups are only of my critical data... documents, photos and home movies... I don't bother backing up my "Hollywood" movies since I can live without them in a disaster.
    ... I actually don't permanently store anything I care about on a laptop or a desktop. It all goes on the NAS box (Network Attached Storage) and then my other computers use it as a network drive. It's attached via Gigabit to my main computer and via wireless to most everything else.
    My achilles heel is that I don't store anything outside of my house yet. If I was smart, I'd alternate two USB drives attached to my NAS box and always keep one of them somewhere else (Safe Deposit Box?).
    ...and that's just one way to do it.
    -Brian
