Data file archiving and polling interval in XI

Dear Experts,
I would appreciate it if you could help me with the best options for the following:
1. How many days is it ideal to keep archived data files in XI?
2. What is the best option for the file polling interval in the file adapter?
Thanks and regards,
Yerragadda.

Hi,
1. How many days is it ideal to keep archived data files in XI?
--> It depends on the volume of data files archived as well as the importance of that archived data.
Instead of archiving them on the XI server, it is better to maintain the archived files on a separate host (e.g. an FTP server). In the sender file adapter you can specify another path/location for the archived files.
2. What is the best option for the file polling interval in the file adapter?
---> The polling interval applies when XI needs to pick up a file, i.e. when your scenario is File-XI-IDoc/RFC etc.
Set the polling interval based on the frequency of the incoming files; you can also control the communication channel externally.
A common mistake when configuring a sender file adapter is to leave the communication channel in test mode with a short polling interval. That is not an error in itself, but we often forget about it and leave the channel active, which loads the server memory unnecessarily.
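The trade-off described above can be sketched outside XI. Below is a minimal Python model of a poll-and-archive cycle (directory names and the processing step are hypothetical illustrations, not XI APIs):

```python
import os
import shutil
import time

def poll_once(source_dir, archive_dir):
    """Process every file currently in source_dir, then move it to archive_dir."""
    for name in sorted(os.listdir(source_dir)):
        src = os.path.join(source_dir, name)
        if os.path.isfile(src):
            # ...hand the file to the integration pipeline here...
            shutil.move(src, os.path.join(archive_dir, name))

def poll_forever(source_dir, archive_dir, interval_seconds=60):
    """Poll at a fixed interval; a very short interval wastes server resources."""
    while True:
        poll_once(source_dir, archive_dir)
        time.sleep(interval_seconds)
```

A shorter interval picks files up sooner but runs the loop body more often, so matching the interval to how frequently files actually arrive keeps the load reasonable.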
The links below will help you understand things better.
http://help.sap.com/saphelp_nw04/helpdata/en/e3/94007075cae04f930cc4c034e411e1/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/03/80a74052033713e10000000a155106/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/17/7481b6d5095b42bd804d1815201ebc/frameset.htm
Thanks
Swarup

Similar Messages

  • Repair Outlook Data Files (.pst and .ost)

    Dears,
Please, I need help: my PST file has been corrupted. I ran a scan with SCANPST, but an error occurred.
Unfortunately the PST file is more than 7 GB and contains very important mails. Is that solvable?
Gratefully,

Hello Ahmed,
To repair an Outlook .pst or .ost file, follow these simple and easy steps.
    Exit Outlook, then click Start > Computer.
    Browse to <drive>:\Program Files — or, if you see a Program Files (x86) folder on the same drive, browse to that instead. For example, C:\Program Files or C:\Program Files (x86).
    In the Search box, type Scanpst.exe.
    If the search doesn't find Scanpst.exe, try searching in the alternative folder mentioned in step 2, above — Program Files or Program Files (x86).
    Double-click Scanpst.exe.
In the Enter the name of the file you want to scan box, enter the name of the .pst file you want the tool to check, or click Browse to select the file.
By default, a new log file is created during the scan. Alternatively, you can click Options and choose not to have a log created, or to have the results appended to an existing log file.
    Click Start.
    If the scan finds errors, you're prompted to start the repair process to fix them.
    Refer:  https://support.office.com/en-us/article/Repair-Outlook-Data-Files-pst-and-ost-64842a0a-658a-4324-acc6-ef0a6e3567cd?ui=en-US&rs=en-US&ad=US
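If the manual search does not turn up Scanpst.exe, the same lookup can be scripted. A minimal sketch in Python (the two default roots match the folders in the steps above and may differ on your machine):

```python
import os

def find_scanpst(roots=(r"C:\Program Files", r"C:\Program Files (x86)")):
    """Walk the given root folders and return every path ending in Scanpst.exe."""
    matches = []
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower() == "scanpst.exe":
                    matches.append(os.path.join(dirpath, name))
    return matches
```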
Danke schön,
    @Aglaja Berko

  • Target Data File Name and Path

    Hi,
I'm trying to deploy a mapping that writes data into a file, but I need to set the name and the directory of the output file dynamically. The only workaround I found was to manually change the generated PL/SQL code and replace the values in the FOPEN call with a parameter, but this is not what I really want to do. Does anyone know how to tell OWB not to hardcode the file name and path?
I really appreciate your time,
    thanks in advance,
    Matias

    Carla,
    Unfortunately our releases do not go that fast...
    What you could do as an intermediate solution (this is what I would do) is create your mapping and load into a staging table (this can happen in set-based mode so would be fast). In a post mapping process you manually write to the flat file by making the calls to FOPEN etc. by selecting from the staging table. The post mapping process can have an input parameter that you dynamically pass (i.e. via a mapping parameter) and set the file name.
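The pattern described above — load a staging table, then have a post-mapping step write it to a file whose name arrives as a parameter — can be sketched as follows. This is only an illustration in Python, with SQLite standing in for the staging table; in OWB the post-mapping process would be PL/SQL calling FOPEN with the same parameter, and the table and column names here are hypothetical:

```python
import sqlite3

def export_staging(conn, target_path):
    """Write all staging rows to target_path; the file name is a run-time
    parameter rather than a value hardcoded in the mapping."""
    with open(target_path, "w") as out:
        for row in conn.execute("SELECT id, value FROM staging ORDER BY id"):
            out.write(",".join(str(col) for col in row) + "\n")
```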
    Mark.

  • Hard drive error - Corrupt files - Archive and Install?

    Hi
I have been using Onyx for a while, and last week it asked me to reinstall from the Snow Leopard disc. I verified the disk in Disk Utility first, and it also advised that I had corrupt files. I ran the install from the disc, and all seemed fine when I checked again in Disk Utility. But now, a week later, I have the same problem, and I am not sure whether it is a problem with OS X or my hard drive. My Mac Pro is still under AppleCare, and they advised tonight that I should do another Archive and Install, and if that does not fix it, to take the machine into my local Apple Store.
The red part of the error message is: Invalid volume file count - it should be 362310 instead of 362313, and Invalid volume directory count - it should be 99776 instead of 99773.
The volume Macintosh HD was found corrupt and needs to be repaired. - Error: This disk needs to be repaired, etc.
Has anyone else experienced the same problem? I use Aperture 3 & Final Cut Express, and until recently had problems getting the two to work together, until AppleCare helped.
    Many thanks
    Matt

    Other people have certainly experienced a corrupt HFS volume header, which is what you have, or did have. It should be a very rare occurrence. If it happens frequently, then your drive is probably failing and should be replaced.

  • Taking data file offline and online

When I take a data file offline, which SCN gets frozen? Is it the stop SCN in the control file or the checkpoint SCN in the data file header?
    Matt

    Hi,
When I take a data file offline, which SCN gets frozen? Is it the stop SCN in the control file or the checkpoint SCN in the data file header?
The stop SCN is the SCN recorded in the control file when the datafile is in backup mode or offline.
It is the checkpoint SCN in the data file header that is frozen. Normally the stop SCN is updated whenever there are changes to the datafile; for an offline datafile it will not be updated until a new SCN is generated, i.e. until the data file is brought back online.
    - Pavan Kumar N

  • File archive and sender namespace

    Hi Guys,
I have a requirement where the XML generated from the system has the namespace urn:microsoft:abc, whereas the namespace in my scenario is urn:name:objectname.
How can I read this file using the sender file adapter?
It throws an error.
Also, once the file is read I want to archive it in a directory. How can I do this?
Please share your thoughts.

    Hi Ravindra,
    >>How can I read this file using sender file adapter??
Is the error thrown in the sender communication channel? If so, try using the XMLAnonymizerBean:
    /people/stefan.grube/blog/2007/02/02/remove-namespace-prefix-or-change-xml-encoding-with-the-xmlanonymizerbean
    https://websmp130.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=880173
    >>Also, once the file is read I want to archive it in the directory..how can I do this??
There is an option in the sender file adapter to archive files (both successful and faulty).
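The XMLAnonymizerBean performs the namespace rewrite inside the adapter's module chain; purely as an illustration of the idea, here is a Python sketch that strips namespace qualifiers from element tags:

```python
import xml.etree.ElementTree as ET

def strip_namespaces(xml_text):
    """Remove the namespace qualifier from every element tag in the document."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        # ElementTree stores namespaced tags as "{uri}localname"
        if "}" in el.tag:
            el.tag = el.tag.split("}", 1)[1]
    return ET.tostring(root, encoding="unicode")
```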
    Regards
    Suraj

  • Diff. times are taken to complete load unload archive and unarchive

    Hi All,
I am facing a problem with loading, unloading, archiving, and unarchiving a repository.
The problem is that every day it takes a different amount of time to complete the process. Does the time depend on the data and images available in the repository?
If there is a large amount of data, will it take more time to complete all these processes? And if a large number of users are logged in, will it take more time to unload?
Please suggest.
    Thanks

    Hi Shalini,
The repository archiving and unarchiving operations involve the database.
The repository tables, fields, and data live in the underlying database, and you can back up an MDM repository using the MDM archiving mechanism. This saves the repository schema as an .a2a file for future reference. The schema contains the tables and the fields inside the tables, and all of this is stored at the DBMS level. So suppose that after unarchiving the repository you make some changes:
1. You add some new tables and fields.
2. You add new data.
All of this ends up stored at the database level, so the amount of data in your repository, in terms of tables, fields, and data, grows. The next time you archive the repository it will take somewhat longer than before, and the same happens when unarchiving, because the new tables and fields and the new data (images, PDFs, etc.) must all be unarchived as part of the repository.
The same holds for loading and unloading: as the tables, fields, and amount of data increase, the time to load and unload also increases.
While the repository is loading, you can check its status on the MDM Server; it will show steps such as checking image IDs, processing tables, processing sound tables, and loading search indexes.
Making a few minor adjustments to MDM and/or the DBMS server can enhance search and update performance by 10% to 20%.
SQL Server's performance increases by about 10% when the main data file (.mdf) and the transaction log file (.ldf) are located on separate spindles. Remember that this means using different drives, not just different drive letters.
Reward points if helpful.
    Regards,
    Vinay Yadav

  • How to Dynamically Select the Data File for a Report at Print Time

    How do you configure a Crystal report to ask for the file to be reported on as the report is being printed, and allow the user to browse to the file?
    The environment is Crystal Reports XI, SP3, with ODBC connection to Sage Timberline Office data version 9.7.  The client names their Payroll unposted time file each pay period, and also needs to report on their posted data file, depending on the time period for the report.  The client will need to select both the date range and the file name.
    I have created a SQL statement in Add Command in Database Expert, which prompts for a file name, but it does not let you browse to select a file on the computer.
    Therefore, in the prompts when they print the report, the parameter offers the user a default file name similar to the name they currently use, so they only have to change the payroll period end date in the supplied file name to run the report successfully.
    The client is concerned that sometimes a user will name their data file differently, and not know how to input the file name into the Crystal report prompt at print time.
    My research on dynamic prompts showed you can link to fields inside the data record, but I did not see a way to dynamically link to select the actual files used in the report.
Another issue is that the naming convention used by the SQL query is different from the basic Windows file name, but I think I can handle that issue.
    The actual file name is typically similar to:
    04-10-11 BP NEW.PRT
    However, in the SQL query, the record ID looks like:
    PRT_00-00-00 BP NEW__TIME
    The SQL Statement using a parameter is:
    SELECT
    "PRT_CURRENT__TIME"."Employee",
    "EMPLOYEE1"."Employee_Name",
    "PRT_CURRENT__TIME"."Date",
    "PRT_CURRENT__TIME"."Units",
    "PRT_CURRENT__TIME"."Job",
    "JOB1"."BP_Emps_Used"
    FROM
    "PRT_CURRENT__TIME" AS "PRT_CURRENT__TIME"
    INNER JOIN "JCM_MASTER__JOB" AS "JOB1"
    ON "PRT_CURRENT__TIME"."Job"="JOB1"."Job"
    INNER JOIN "PRM_MASTER__EMPLOYEE" AS "EMPLOYEE1"
    ON "PRT_CURRENT__TIME"."Employee"="EMPLOYEE1"."Employee"
    WHERE "JOB1"."BP_Emps_Used" = 1
    AND
    ("PRT_CURRENT__TIME"."Date" BETWEEN
    {?As of Date} - 41 AND {?As of Date})
    UNION ALL
    ( SELECT
    "PRT_NEW__TIME"."Employee",
    "EMPLOYEE2"."Employee_Name",
    "PRT_NEW__TIME"."Date",
    "PRT_NEW__TIME"."Units",
    "PRT_NEW__TIME"."Job",
    "JOB2"."BP_Emps_Used"
    FROM
    "{?NEWPRT}" AS "PRT_NEW__TIME"
    INNER JOIN "JCM_MASTER__JOB" AS "JOB2"
    ON "PRT_NEW__TIME"."Job"="JOB2"."Job"
    INNER JOIN "PRM_MASTER__EMPLOYEE" AS "EMPLOYEE2"
    ON "PRT_NEW__TIME"."Employee"="EMPLOYEE2"."Employee"
    WHERE "JOB2"."BP_Emps_Used" = 1
    AND
    ("PRT_NEW__TIME"."Date" BETWEEN
    {?As of Date} - 41 AND {?As of Date})

    Hello,
Sorry, you'll have to contact Sage on how to do this. We can help you once you get connected, but we can't help you get around their connection methods.
There is no preview-time Set Database Connection method you can use in the CR Designer. The Designer assumes you select the connection first, or use the Set Location option, before previewing or refreshing the data.
If you are doing this in the Sage program itself we can't help you; you'll have to contact Sage for assistance.
Sage is an OEM partner; they are responsible for supporting their product and CR. If they have issues helping you, they will contact us directly for assistance.
    Thank you
    Don

  • Outlook 2010 - Data File Properties/Folder Size verses Windows Explorer pst file size

    I am running Outlook 2010 32bit Version 14.0.6129.5000 on a Windows PC running Windows 7 Professional.  All updates from MS are up to date.
I have several pst files I open with Outlook 2010. The size of the files displayed in Outlook is very different from what is displayed in Windows Explorer.
For example, for one of the pst files, called "business.pst", Outlook displays under "Data File Properties -> Folder Size" that the Total Size (including subfolders) is 764,354 KB. Windows Explorer says the file size is 1,190,417 KB.
For some reason MS Outlook 2010 is displaying the wrong folder size. Any ideas why this is the case?
    Thanks,
    Pat

An Outlook mailbox grows as you create and receive items. When you delete items, the size of the Outlook Data File (.pst and .ost) might not decrease in proportion to the data that you deleted until the file has been compacted.
Normally, after you have deleted items from an Outlook Data File (.pst), the file is automatically compacted in the background while Outlook is running and you are not using your computer.
As an exception, when the Outlook Data File (.pst) is somehow corrupt, the compaction might not finish correctly, so the size of the file may remain the same as before compaction.
To solve this, first run scanpst to fix the Outlook Data File (.pst); after that, you can manually start the compact command.
When finished, compare the file size again.
    Max Meng
    TechNet Community Support

  • Copy file name and info from finder window

    I need to get the file name and file size for lots of files in lots of folders into a text file or excel file. I have found the way to copy the file path into text edit, but can't figure out how to copy all the info in a file window. Is there a way to do this?

    It's easily done with Terminal.
    cd (drag folder into Terminal window, press return). Then:
    ls -l > ~/Desktop/list.txt
    A file called "list.txt" will appear on your Desktop, containing a list of files from the folder, along with their modification dates, file sizes, and ownership/permission info.
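Since the original question covered many folders and wanted the result in Excel, here is a Python alternative (a sketch; pick your own folder and output paths) that walks a folder tree and writes each file's path and size to a CSV file Excel can open:

```python
import csv
import os

def list_files(folder, csv_path):
    """Write the path and size in bytes of every file under folder to a CSV."""
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "size_bytes"])
        for dirpath, _dirs, files in os.walk(folder):
            for name in sorted(files):
                full = os.path.join(dirpath, name)
                writer.writerow([full, os.path.getsize(full)])
```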

  • Date, File name, User Stamp

    I've looked for hours for a script that will work in Illustrator CS4 that will put a text block on the page identifying:
    Date, File Name and the User's name on the document (art board)
    I am not versed in scripting. Can anyone help me out?

Hi y'all. This post is where scripting all started for me. At the time I didn't know anything, and Muppet Mark kindly helped me out. Since then I have learned AppleScript and written my slug script in it. This has the advantage of being able to access system information. Here it is:
    -- captures the user's first name from the system for use in dialog boxes
    set myName to (long user name of (system info))
    set sp to (offset of " " in myName)
    set myFirstName to text 1 thru (sp - 1) of myName
    -- captures the user's short name from the system for use in the slug
    set myShortName to (short user name of (system info))
    set myInitials to text 1 thru 2 of myShortName
    tell application "Adobe Illustrator" to set myDocCount to count of documents
    if myDocCount > 0 then
        tell application "Adobe Illustrator"
            try
                set myPath to file path of document 1 as alias -- if the document is not saved this will cause an error
            on error
                display dialog "You haven't saved document yet " & myFirstName & "." buttons {"Cancel", "Save"} default button 2
                -- brings up Save As dialog box
                tell application "System Events" to tell process "Adobe Illustrator"
                    click menu item "Save As..." of menu 1 of menu bar item "File" of menu bar 1
                    tell application "Finder"
                        set filesavepath to "/Volumes/Server/Folder" -- put your own file path in here
                    end tell
                    delay 0.2
                    tell window "Save As"
                        keystroke "g" using {command down, shift down}
                        repeat until exists sheet 1
                            delay 0.5
                        end repeat
                        tell sheet 1
                            keystroke filesavepath
                            click button "Go"
                        end tell
                    end tell
                end tell
                -- end Save As dialog box
                return
            end try
            set myFile to name of document 1
            set myFolder to myPath as string
            -- this section looks at the file info to get the file version information
            set myFileVersion to my getVersion(myPath)
            set AppleScript's text item delimiters to "Created"
            set myFileVersion to text item 1 of myFileVersion
            if myFileVersion is "Saved As v.15 " then
                set myFileVersion to "Illustrator CS5"
            else if myFileVersion is "Saved As v.14 " then
                set myFileVersion to "Illustrator CS4"
            else if myFileVersion is "Saved As v.13 " then
                set myFileVersion to "Illustrator CS3"
            else if myFileVersion is "Saved As v.12 " then
                set myFileVersion to "Illustrator CS2"
            end if
            set AppleScript's text item delimiters to {""}
            -- end of file info
            set AppleScript's text item delimiters to ":"
            set clientFolder to text item 3 of myFolder -- always picks out the client folder on our server
            set AppleScript's text item delimiters to {""}
            if clientFolder contains "_" then
                set clientFolder to my cleanName(clientFolder) -- to format the client names properly
            else
                set clientFolder to clientFolder
            end if
            -- find the current date. This is the date when the file was last edited but this method enables you to run the script before saving
            tell application "Finder"
                set myDate to (current date) as string
            end tell
            set myDate to (word 2 of myDate) & " " & (text 1 thru 3 of word 3 of myDate) & " " & (word 4 of myDate) as string
            -- end find date
            -- checks for the correctly named frames and then populates with the correct information
            if exists (text frame "titleblock-software" of document 1) then
                set contents of text frame "titleblock-software" of document 1 to myFileVersion
            end if
            if exists (text frame "titleblock-client" of document 1) then
                set contents of text frame "titleblock-client" of document 1 to clientFolder
            end if
            if exists (text frame "titleblock-file" of document 1) then
                set contents of text frame "titleblock-file" of document 1 to myFile
            end if
            if exists (text frame "titleblock-path" of document 1) then
                set contents of text frame "titleblock-path" of document 1 to myPath as string
            end if
            if exists (text frame "titleblock-date" of document 1) then
                set contents of text frame "titleblock-date" of document 1 to myDate
            end if
            if exists (text frame "titleblock-editor" of document 1) then
                set contents of text frame "titleblock-editor" of document 1 to myInitials
            end if
            if exists (text frame "titleblock-version" of document 1) then
                set versionCount to get contents of text frame "titleblock-version" of document 1
                set versionCount to (versionCount + 1)
                set contents of text frame "titleblock-version" of document 1 to versionCount
            end if
            -- end populating named frames
        end tell
    end if
    -- this handler is to get the version of the file
    on getVersion(added_item)
        set myFileVersion to long version of (info for added_item)
    end getVersion
    -- this handler converts every 'odd' character to an underscore, modify as needed
    on cleanName(newName)
        set chars to every character of newName
        repeat with i from 1 to length of chars
            if item i of chars as text is equal to "_" then
                set item i of chars to " "
            end if
        end repeat
        return every item of chars as string
    end cleanName

  • Contents of iPhoto 6 data files

    I'm trying to get a comprehensive understanding of the data structures of iPhoto 6. For the life of me I can't find any previous postings that explain all the files. In particular, I'd like to understand (high level) the contents of the following files:
    AlbumData.xml
    Dir.data
    iPhoto.ipspot
    iPhotoLock.data
    Library.data
    Library.iPhoto
    Library6.iPhoto
    Thumb32Segment.data
    Thumb64Segment.data
    ThumbJPGSegment.data
    I know that the Thumb*.data files have thumbnails in them, but don't know what the difference is between the three files. I'm assuming the iPhotoLock.data is a semaphore to lock out multiple accesses to these data files. And I have divined that AlbumData.xml contains the roll and album information for every photo, in addition to some other good stuff.
    What I understand the least are the three Library*.iPhoto files. In particular Library6.iPhoto is big (~100Mb for 6000 pics)--what's in there?!
    Thanks!
    iMac   Mac OS X (10.4.7)  
    iMac   Mac OS X (10.4.7)  

So I want to know what they are so I can experiment with doing some of my photo management in iPhoto, and some directly through the file system. And I want to try to do things like delete Library6.iphoto but I don't want to spend an hour re-entering the dates for all of my rolls when iPhoto recreates the rolls with the weird dates that it grabs.
    That's what I had expected. Don't tamper with the iPhoto library unless you enjoy bleeding ulcers. The following is pasted from the iPhoto "Help" screens:
    IMPORTANT: If you move, delete, rename, or otherwise touch files or folders within this folder, you may be unable to see your pictures in the iPhoto application.
    What is it that you want to do that you're unable to do from INSIDE iPhoto?

  • Outlook 2010 - Cannot Add or Create a Data File

    I am trying to Add both an existing Data File (.pst) into Outlook to review the emails, and add a New Email Account with a separate Data File from the Default Outlook.pst File.
    Setup is Windows 7 Pro x64 - Office 2010 Standard
    I already have 5 Email Accounts setup, each with their own Data File, so this did work in the beginning.
I have tried opening Account Settings/Data Files and clicking on 'Add', and I get this error: "An unknown error occurred, error code 0x80070003"
    When I add a New Email Account and leave the 'New Outlook Data File' selection checked for the 'Deliver new messages to:' option I get the same above error.
When I add a New Email Account and select 'Existing Outlook Data File' for the 'Deliver new messages to:' option and click on the 'Browse' button, nothing happens. I can, however, add the path in the field, and that will create the New Account using that Data File.
The last related issue: if I want to change the Data File for an existing account to a new Outlook Data File, I open Account Settings/Email, click on an existing account, click the 'Change Folder' button, then click the 'New Outlook Data File' button, and nothing happens.
    Kirk

    Hi
I had the same problem for an end user, and I did all of the following:
    1)Reset My Documents to default
    2)Repaired Office\Outlook installation
    3)Uninstalled\Reinstalled Office\Outlook with reboots in between installations
    4)Cleared the user's previous settings
    This finally resolved my issue
    http://autocad.autodesk.com/?nd=blogs&post_id=136093&blog_id=62
    Creating a registry setting
    "HKCU\Software\Microsoft\Office\14.0\Outlook\ForcePSTPath"
    and forcing PST to be created  in the default path
    "C:\Users\username\AppData\Local\Microsoft\Outlook"
    Hope this helps
    MsHarvey

  • Space allocation on 11g R2 on multiple data files in one tablespace

Hello,
If the following is explained in the Oracle 11g R2 documentation, please send a pointer; I can't find it myself right now.
My question is about space allocation (during inserts and during table data loads) in one tablespace containing multiple data files.
    suppose i have Oracle 11g R2 database and I am using OMF and Oracle ASM on Oracle Linux 64-bit.
    I have one ASM disk group called ASMDATA with 50 ASM disks in it.
I have one tablespace called APPL_DATA with 50 data files, each file = 20 GB (equal size), to contain one 1 TB table called MY_FACT_TABLE.
    During Import Data Pump or during application doing SQL Inserts how will Oracle allocate space for the table?
    Will it fill up one data file completely and then start allocating from second file and so on, sequentially moving from file to file?
    And when all files are full, which file will it try to autoextend (if they all allow autoextend) ?
Or will Oracle use some sort of proportional fill like MS SQL Server does, i.e. allocate one extent from data file 1, the next extent from data file 2, ... and then wrap around again? In other words, will it keep all files equally allocated as much as possible, so that at any point in time they have approximately the same amount of data in them (assuming the same initial size)?
    Or some other way?
    thanks.

    On 10.2.0.4, regular data files, autoallocate, 8K blocks, I've noticed some unexpected things. I have an old, probably obsolete habit of making my datafiles 2G fixed, except for the last, which I make 200M autoextend max 2G. So what I see happening in normal operations is, the other files fill up in a round-robin fashion, then the last file starts to grow. So it is obvious to me at that time to extend the file to 2G, make it noautoexented, and add another file. My schemata tend to be in the 50G range, with 1 or 2 thousand tables. When I impdp, I notice it sorts them by size, importing the largest first. I never paid too much attention to the smaller tables, since LMT algorithms seem good enough to simply not worry about it.
I just looked (with dbconsole tablespace map) at a much smaller schema I imported not long ago, where the biggest table was 20M in 36 extents, the second 8M in 23 extents, and so on, totalling around 200M. I had made 2 data files, the first 2G and the second 200M autoextend. Looking at the impdp log, I see it isn't very strict about sorting by size, especially under 5M. So where did the 20M table it imported first end up? At the end of the autoextend file, with lots of free space below a few tables there. The 2G file seems to have a couple thousand blocks used, then 8K blocks free, 5K blocks used, 56K blocks free, 19K blocks used, 148K free (with a few tables scattered in the middle of there), 4K blocks used, the rest free. Looking at a similar 8G schema, it looks like the largest tables got spread across the middle of the files, then the second largest next to them, and so forth, which is more what I expected.
    I'm still not going to worry about it. Data distribution within the tables is something that might be important, where blocks on the disk are, not so much. I think that's why the docs are kind of ambiguous about the algorithm, it can change, and isn't all that important, unless you run into bugs.

  • LabVIEW created .DAT file with saving only new data on existing files

I can successfully create and save a .DAT file using LabVIEW and DIAdem. The save command I am using (in LabVIEW) only overwrites existing files; I would like it to save only the data that is new since the last save. Overall, I would like to save the data every hour, and since my test will last for many hours, the further along in the test, the longer it would take to save the full file. I know DIAdem has the capability to save in such a manner, so I have thought about having a small DIAdem application save periodically after I write to DIAdem, but I haven't found a way to save the .DAT file without any signals coming into the save function. I have attached the example .DAT file creator and saver I am using to figure it out. Thanks for your help!

Howdy New Guy -
I am not sure if you are looking to append to a file or to overwrite a file. In either case, you can take a look at the "Append To File.vi" example for DIAdem.
Either way, you may be experiencing problems because you are only using the "DIAdem Simple File Write.vi" (found on the DIAdem >> DAT Files palette). Instead, you can use the "DIAdem File Write.vi" (located on the DIAdem >> DAT Files >> DAT Files Advanced palette). This VI can be set to overwrite or append to a file.
    I hope this helps in your development!
    Andrew W. || Applications Engineer
