Data storage format in Berkeley DB

Hi,
I have a few questions regarding how data storage is achieved by Berkeley DB.
I created a Berkeley DB database called 'BDB' by inserting 1 million <key, value> pairs. The environment name was set to 'BENV'; that is, I had to create a directory named 'BENV' in the current working directory before executing the program.
After invoking the program and completing the insertions, I saw a file 'BDB' in the 'BENV' directory. The size of 'BDB' is 56 MB. There are also four more files, __db.001 through __db.004, with sizes of 4 KB, 320 KB, 1.9 MB, and 608 KB.
I should mention that I didn't enable logging when I created the database and environment.
My question is: why are there so many files, and what is their purpose? I have a feeling that the files '__db.00x' (where x = 1 to 4) also store data, because I created two more databases with 10 million and 200 million records, and they have these files too; it's just that their sizes have increased quite a bit.
I am trying to understand more details about the data storage layout of Berkeley DB and I would appreciate any help regarding this.
Thanks.
Andor

Are you using the STL interface? We have code that uses the raw C++ interface, and it does not create those files starting with '__', whereas the code that uses the STL interface does. I have not looked into it much, since I need those files during the process lifecycle anyway. My use of Berkeley DB here is more like a disk-backed set than a true persistent database.
I would try running a checkpoint to see whether everything ends up in a single file before closing the DbEnv. Let me try, as I have sample code using the DB STL interface.
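For reference, the '__db.00x' files are, as far as I know, Berkeley DB's environment region files (the shared memory pool cache, lock tables, and mutexes), so their size tracks the environment configuration rather than your records. Here is a minimal C++ sketch of the flush-and-close I have in mind; the 'BENV'/'BDB' names are taken from your post, and with transactions enabled you would call DbEnv::txn_checkpoint instead of Db::sync:

#include <db_cxx.h>

int main() {
    // Open an environment like the one described: a cache only, no logging.
    DbEnv env(0);
    env.open("BENV", DB_CREATE | DB_INIT_MPOOL, 0);

    Db db(&env, 0);
    db.open(nullptr, "BDB", nullptr, DB_BTREE, DB_CREATE, 0644);

    // ... Db::put() calls for the key/value pairs would go here ...

    db.sync(0);    // flush dirty cache pages into the 'BDB' file
    db.close(0);
    env.close(0);  // the __db.00x region files stay behind until the
                   // environment itself is removed (e.g. with db_recover)
    return 0;
}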

Similar Messages

  • Data storage format

    Hi all,
    I am sure what I am about to ask is a fairly common question, but I can't seem to find a sensible answer on the forum, so here goes: (apologies for the long post, I am trying to be thorough)
    What is the best format to store data in?
    Of course there are dozens of answers, and in reality it depends on the application. So without further ado, here is some additional information:
    The data is logged from a DAQ device, at rates up to 10 kHz, several channels, and for a few hours. So the data files can potentially grow up to several hundred megs big (typically if the data acquisition period is more than a few minutes then the sampling rate will be limited to 1 kHz).
    At the moment, I am collecting data and writing it, on the fly, as text to a file. As mentioned, the files can get up to a few hundred megs big. This file is then visualised in MATLAB, and can take several minutes on my rusty Pentium IV to display, but MATLAB manages to do the necessary file reading/memory allocation/graphing without too much problem.
    I wish to remove the MATLAB requirement, so I decided to create a viewer application in LabVIEW.
    I know that text format is quite inefficient, and so I moved over to a binary format in the flavour of TDM (not TDMS, I am using LV 7.11) and this is where the trouble starts. Firstly, doing on-the-fly writes to a TDM file caused my PC memory usage to climb unbounded! This was solved using the TDM header VIs which someone on the forum suggested. The next problem comes when trying to view the data. I have a separate viewer application, which is just a simple read from a TDM file and plot to a graph. Wow! The performance is horrible!!! As TDM does not, as far as I can tell, allow only portions of the data to be read, the whole application basically hangs as Windows tries to allocate several gigabytes of memory to process my 200 MB data file!!! So that's a dead end there...
    I have read about hierarchical waveform storage (HWS), and also know that streaming TDM (TDMS) exists in newer versions of LV.
    Can anyone provide any insight into which of these formats would be more suitable for my application? I quite like the TDM format; it seems to provide most of what I require except the performance, so I am hoping TDMS will solve most of these problems. I don't know that much about HWS. Another reason I want to use TDM(S) is that it integrates well with DIAdem.
    Any advice??
    Thanks to anyone that took the time to read this far, more thanks to anyone who replies!
    nrp
    CLA

    First let me say that if you really do have a "...rusty Pentium IV..." the first thing you might want to do is invest in a good dehumidifier!
    But to your main point, the issue that you mention in relation to TDM files is a known problem. One alternative would be to save the file in binary form rather than ASCII. The conversion to and from this format isn't hard and as of V7.1 I believe there were still example VIs that illustrated streaming DAQ data to a binary file.
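    Not LabVIEW, but to illustrate what streaming to a raw binary file amounts to, here is a minimal C++ sketch (the file name is made up): you append the sample bytes as they arrive instead of formatting each value as text, which is both smaller on disk and much faster to read back.
    #include <cstdio>
    #include <vector>
    // Append a block of samples to a raw binary file as-is, rather than
    // formatting each value as text.
    void append_block(const char* path, const std::vector<double>& block) {
        std::FILE* f = std::fopen(path, "ab");  // open for binary append
        if (!f) return;
        std::fwrite(block.data(), sizeof(double), block.size(), f);
        std::fclose(f);
    }
    int main() {
        append_block("daq_log.bin", {1.0, 2.0, 3.0});  // hypothetical usage
        return 0;
    }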
    Another thing to consider though is whether or not you really need all that data. For example, if you know that in your application changes of less than 1% are not significant, you can process the data before saving it to add timestamps to the datapoints and delete any values that vary from the previous saved point by less than 1%. To see how this would work, let's say that the input voltage at the start of the test is 100 V (to keep the math simple). As long as the voltage did not go higher than 101 V or below 99 V you would not save another datapoint. When the signal does exceed that limit, a new timestamped datapoint would be saved, and a new set of limits would be calculated using this datapoint.
    Depending upon your application, this approach can sometimes be very valuable because it allows you to sample very fast, but not have to save tons of essentially repetitive data.
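    To make the 1% idea concrete, here is a minimal sketch of that deadband filter (plain C++ rather than LabVIEW so it fits in a post; the signal and names are made up): a sample is kept only when it leaves the +/-1% band around the last kept value, and each kept sample carries its own timestamp.
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>
    struct SavedPoint { double t; double v; };  // timestamped datapoint
    // Keep a sample only if it deviates from the last kept value by more
    // than bandFrac (1% by default); dt is the sample interval in seconds.
    std::vector<SavedPoint> deadband(const std::vector<double>& samples,
                                     double dt, double bandFrac = 0.01) {
        std::vector<SavedPoint> kept;
        for (std::size_t i = 0; i < samples.size(); ++i) {
            if (kept.empty() ||
                std::fabs(samples[i] - kept.back().v) > bandFrac * std::fabs(kept.back().v))
                kept.push_back({i * dt, samples[i]});  // new reference point, new limits
        }
        return kept;
    }
    int main() {
        std::vector<double> sig;  // 100 V signal with a +/-0.5 V wiggle, 10 kHz
        for (int i = 0; i < 100000; ++i)
            sig.push_back(100.0 + 0.5 * std::sin(i * 0.001));
        std::printf("kept %zu of %zu samples\n", deadband(sig, 1e-4).size(), sig.size());
        return 0;
    }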
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • I need a memory management / data storage recommendation for my iMac 2 GHz Intel Core 2 Duo

    Since giving my mid-2007 iMac a 2 GB memory boost to accommodate Lion all has been well; however, my memory is full. I have a sizable iTunes library and 6000 photos in iPhoto. Methinks I should store all this more effectively for the safety of the music and photos and for the well-being of the computer... but there seem to be choices. Is this where iCloud comes into play, or Time Capsule, or should I just get an external mini hard drive? Can anyone help me with some pearls of wisdom about data storage?

    Greetings John,
    There are two types of memory which you mention here.
    The 2 GB memory you refer to is RAM.  It is not used for storing information; rather, it gives the computer a place to do much of its work.
    File storage is handled by the hard drive of the computer.
    If your available hard drive space is getting low you can move larger files to a Mac Formatted external drive:
    Faster connection (FW 800) drives:
    http://store.apple.com/us/product/H2068VC/A
    http://store.apple.com/us/product/TW315VC/A
    http://store.apple.com/us/product/H0815VC/A
    Normal speed (USB) drives:
    http://store.apple.com/us/product/H6581ZM/A
    Larger files can include entire databases like iTunes, iMovie, or iPhoto.
    Keep in mind that if you move these items to an external drive you will have to have the drive plugged in and powered on to access the data on them.  In addition, if you move important information off your internal drive to an external, you should be sure that your backup solution is backing up that external drive to keep your information safe.
    iCloud is not a file storage solution, and Time Capsule is not suited for storing databases like those mentioned above (it's meant primarily as a backup solution).  I would stick with external drives (one to hold your big files and another big enough to back up both your computer and the first drive).
    Here are some other general computer clean up suggestions: http://thexlab.com/faqs/freeingspace.html.
    Hope that helps.

  • Windows 7 64-bit RAID 1 data storage setup with UEFI and 6 TB drives

    I just bought a Gigabyte motherboard with a UEFI BIOS, one 128 GB SSD for the OS (boot), and two 6 TB drives for use in a RAID 1 setup for data storage.
    I have been able to install Win 7 64-bit using the UEFI DVD drive, and I have been able to define the 6 TB mirror via the BIOS (under the Intel Rapid Storage Technology option).  But whenever I go into Windows and check under Disk Management, I don't see the RAID 1; I see two drives listed for the data, each at 1492 GB.  I should be seeing one drive at 5.4 TB.
    Note: I have tried re-installing Win 7 64-bit many times to see if I have missed something.  When I run setup off the UEFI DVD drive, I see that drive 1 (of what should be part of the mirror) and drive 2 are listed there at the reported 1492 GB size each.
    So it seems that Win 7 doesn't see the mirror that I created via the BIOS.  Is this a BIOS setting problem somewhere?  I presume I should already see the correct mirror size on this screen.
    Also, do I need to install Win 7 via the UEFI DVD if I don't want to boot off those large drives?  (I just want to boot off the SSD.  But when I did try that using legacy mode, I couldn't later create a single large drive for the mirror using GPT formatting; it just wasn't an option, although the mirror was recognized as one drive, broken into 2 TB and 3 TB.)
    What could I possibly be doing wrong?
    Thanks for any help

    Thank you, Roger.  I did call Gigabyte.  This is how I solved it with the Gigabyte support team.
    1. Don't try installing Win 7 64-bit SP1 and setting up RAID at the same time (go ahead and unplug the RAID drives if you have them plugged in).  The OS should be installed first on the boot drive, but make sure the BIOS is already set to RAID (not AHCI; if you already installed the OS under AHCI, you need to reinstall the OS.  I find this crazy, because what if you decide to add RAID down the road to an existing machine?  You can't, because of this).  Make sure to install the OS with the UEFI DVD drive option (I had done this).
    2. Once the OS is installed, install all the drivers that come with the motherboard; that should include the Intel RAID drivers.  (Gigabyte support didn't tell me to do this, but I didn't want to risk having to reinstall the OS again or having more problems, and it makes sense.)
    3. I have the GA-H97-Gaming 3 motherboard, which had BIOS F3.  I upgraded it to F5 as per the support tech.
    4. Turn off the machine and replug the two 6 TB drives.
    5. After rebooting, I found that I no longer saw Intel Rapid Storage Technology.  (This should have been displayed once I set the SATA Mode Selection to RAID before I started all this business; it did under BIOS F3, but not under F5.  Bug?  I think so, and I plan on letting Gigabyte know about it.)  This is a problem, because Intel Rapid Storage Technology is where you define the RAID in the first place.  After several reboots, I tried CTRL-I to get to Intel's interface rather than the Gigabyte BIOS.  Voila: I defined the mirror.
    6. I am able to boot into the OS.  I went to Disk Management and now see one 6 TB drive.  I initialized it, of course using GPT, and formatted it.  All looks good now.
    Hope this helps someone; it took me enough tries to get it right.  I really am bothered about the OS needing to be installed in the correct SATA mode just to get RAID working.  As I mentioned, you can't change your mind afterwards without reinstalling the OS.  Also, I couldn't even get the OS setup to start when I had the RAID drives plugged in, so make sure to unplug them.
    Best regards.

  • Use an external HD with AirPort Extreme to access it for data storage from computers connected to the Wi-Fi network?

    Can I connect my My Passport HD to my AirPort Extreme's USB port to access it from my computers (27" iMac 2012, now running Yosemite) on that Wi-Fi network for data storage and retrieval purposes?

    Yes, if the AirPort base station's hardware supports 802.11n. The drive needs to be formatted as Mac OS Extended (Journaled), FAT16, or FAT32, and may need to be connected through a USB hub with its own power supply.

  • How to change data storage in Planning?

    I want to change the data storage property of a base member in Planning from Shared to Store. When I click on EDIT, I don't see the option for data storage. Where should I go?

    I remember you posted questions about using the Outline Load Utility. You can use the OLU to update data storage properties. Use the following format in your CSV file:
    Parent,Account,Data Storage
    Assets,Asset1,Never Share
    Just an alternative to updating members manually if you have a lot of members to modify...

  • XML binary storage format impairs schema validation?

    I'm using Oracle 11g R1 on Windows Server 2003. I successfully registered schemas and created tables and indexes in the new binary storage format. However, when trying to load data, I'm running into problems. Schema validation behaves as if the full feature set of XML Schema were mysteriously no longer supported.
    There is probably more, but at least wildcard elements (xs:any) and element references (xs:element ref="STH") are simply ignored in the schema definition, and data is rejected even when it conforms to the schema.
    Is there any solution for this, or am I out of luck? I wanted to go back to CLOB storage as used in a previous installation, but I'm running into problems when registering the schema: it complains about an empty SQL name although I don't have any defined. I'm pretty weirded out by all this.
    I created the schema and table in a straightforward way:
    begin
      dbms_xmlschema.registeruri(
        schemaurl => 'http://www.xxxhello.com/archive_tsd.xsd',
        schemadocuri => '/public/user/archive_tsd.xsd',
        gentypes => FALSE,
        options => DBMS_XMLSCHEMA.REGISTER_BINARYXML
      );
    end;
    /
    CREATE TABLE archive OF xmltype XMLTYPE STORE AS binary xml XMLSCHEMA
    "http://www.xxxhello.com/archive_tsd.xsd" ELEMENT "CompleteDocument";
    create index idx_lastmodified_archive on archive t
    (extractvalue(VALUE(t),'/CompleteDocument/DocContent/LastModified'));
    Because xs:any and element references are ignored, I get errors like:
    LSX-00213: only 0 occurrences of particle "REFDELEM", minimum is 1.
    Thanks for your help.

    The schema is very large (>200 KB). Where should I upload it, or can I send it to you? I'm a bit concerned about the confidentiality of company data. However, the instance is not valid yet; I'm in the process of modifying the schema to match all instances, but it breaks in places that should already be okay. No, I never used SchemaValidate.
    But I've made an example where at least xs:any doesn't work. Element references work, though.
    Sample schema:
    <?xml version = "1.0" encoding = "UTF-8"?>
    <xs:schema
      xmlns:tsd = "http://namespaces.softwareag.com/tamino/TaminoSchemaDefinition"
      xmlns:xs = "http://www.w3.org/2001/XMLSchema">
      <xs:annotation>
        <xs:appinfo>
          <tsd:schemaInfo name = "sbreak">
            <tsd:collection name = "sbreak"></tsd:collection>
            <tsd:doctype name = "CompleteDocument">
              <tsd:logical>
                <tsd:content>open</tsd:content>
                <tsd:systemGeneratedIdentity reuse = "false"></tsd:systemGeneratedIdentity>
              </tsd:logical>
            </tsd:doctype>
            <tsd:adminInfo>
              <tsd:server>4.4.1.1</tsd:server>
              <tsd:modified>2007-07-03T16:00:46.484+02:00</tsd:modified>
              <tsd:created>2007-07-03T15:29:04.968+02:00</tsd:created>
              <tsd:version>TSD4.4</tsd:version>
            </tsd:adminInfo>
          </tsd:schemaInfo>
        </xs:appinfo>
      </xs:annotation>
      <xs:element name = "CompleteDocument">
        <xs:complexType>
          <xs:choice minOccurs = "0" maxOccurs = "unbounded">
            <xs:element name = "ComplexNormal">
              <xs:complexType>
                <xs:choice minOccurs = "0" maxOccurs = "unbounded">
                  <xs:element name = "NormalElem1" type = "xs:string"></xs:element>
                  <xs:element name = "NormalElem2" type = "xs:string"></xs:element>
                </xs:choice>
              </xs:complexType>
            </xs:element>
            <xs:element name = "ComplexAny">
              <xs:complexType>
                <xs:choice minOccurs = "0" maxOccurs = "unbounded">
                  <xs:any minOccurs = "0" maxOccurs = "unbounded"></xs:any>
                </xs:choice>
              </xs:complexType>
            </xs:element>
            <xs:element name = "ComplexRef">
              <xs:complexType>
                <xs:choice minOccurs = "0" maxOccurs = "unbounded">
                  <xs:element ref = "RefdElem"></xs:element>
                </xs:choice>
              </xs:complexType>
            </xs:element>
            <xs:element name = "LastModified" type = "xs:string"></xs:element>
          </xs:choice>
        </xs:complexType>
      </xs:element>
      <xs:element name = "RefdElem">
        <xs:complexType>
          <xs:choice minOccurs = "0" maxOccurs = "unbounded">
            <xs:element name = "Elem1" type = "xs:string"></xs:element>
            <xs:element name = "Elem2" type = "xs:string"></xs:element>
          </xs:choice>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    Sample instance:
    <?xml version="1.0" encoding="UTF-8" ?>
    <CompleteDocument>
         <ComplexNormal>
              <NormalElem1>Test1</NormalElem1>
              <NormalElem2>Test2</NormalElem2>
         </ComplexNormal>
         <ComplexAny>
              <AnyElem>Test3</AnyElem>
         </ComplexAny>
         <ComplexRef>
              <RefdElem>
                   <Elem1>Test4</Elem1>
                   <Elem2>Test5</Elem2>
              </RefdElem>
         </ComplexRef>
    </CompleteDocument>
    Log of what I did. First I confirmed that I could insert the instance using CLOB storage:
    SQL> begin
      2    dbms_xmlschema.registeruri(
      3      schemaurl => 'http://www.xxxhello.com/sbreak_tsd.xsd',
      4      schemadocuri => '/public/sbreak_tsd.xsd'
      5    );
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    SQL> CREATE TABLE sbreak OF xmltype XMLTYPE STORE AS clob XMLSCHEMA
    "http://www.xxxhello.com/sbreak_tsd.xsd" ELEMENT "CompleteDocument";
    Table created.
    SQL> create index idx_lastmodified_sbreak on sbreak t (extractvalue(VALUE(t),
    '/CompleteDocument/LastModified'));
    Index created.
    SQL> insert into sbreak values(xmltype(bfilename('DATA', 'sbreak/sbreakinstance.xml'),
    NLS_CHARSET_ID('AL32UTF8')));
    1 row created.
    Then I deleted the table and schema again:
    SQL> drop index idx_lastmodified_sbreak;
    Index dropped.
    SQL> drop table sbreak;
    Table dropped.
    SQL> begin
      2    dbms_xmlschema.deleteschema(
      3      schemaurl => 'http://www.xxxhello.com/sbreak_tsd.xsd'
      4     ,delete_option => dbms_xmlschema.delete_cascade_force
      5    );
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    After that, I created the schema and table with binary XML storage and tried to insert the same instance again:
    SQL> begin
      2    dbms_xmlschema.registeruri(
      3      schemaurl => 'http://www.xxxhello.com/sbreak_tsd.xsd',
      4      schemadocuri => '/public/sbreak_tsd.xsd',
      5      gentypes => FALSE,
      6      options => DBMS_XMLSCHEMA.REGISTER_BINARYXML
      7    );
      8  end;
      9  /
    PL/SQL procedure successfully completed.
    SQL> CREATE TABLE sbreak OF xmltype XMLTYPE STORE AS binary xml XMLSCHEMA
    "http://www.xxxhello.com/sbreak_tsd.xsd" ELEMENT "CompleteDocument";
    Table created.
    SQL> create index idx_lastmodified_sbreak on sbreak t (extractvalue(VALUE(t),
    '/CompleteDocument/LastModified'));
    Index created.
    SQL> insert into sbreak values(xmltype(bfilename('DATA', 'sbreak/sbreakinstance.xml'),
    NLS_CHARSET_ID('AL32UTF8')));
    insert into sbreak values(xmltype(bfilename('DATA', 'sbreak/sbreakinstance.xml'),
    NLS_CHARSET_ID('AL32UTF8')))
    ERROR at line 1:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LSX-00021: undefined element "AnyElem"
    Next I'll try a modified schema without the tsd namespace added by the schema editor I use (the original large schema was migrated from the Tamino XML database).

  • How to access and display data stored in the cache for a SUP 2.0 workflow?

    Hi to all.
    I have an application with a menu item which obtains data through an online request; the result is shown in a listview.
    My problem is when my BlackBerry has no connection (the offline scenario): when I select the menu item, I get an error.
    How can I access and display the data stored in the cache for my MBO? I have read that I can use getMessageValueCollection in custom.js to access my data, but once I get the data, how can I associate it with a listview the way an online request does? Do I have to develop my own screen in HTML, or is there another way?
    Thanks.

    I'm not entirely clear on what you mean by "cache" in this context.  I'm going to assume that what you are really referring to is the contents of the workflow message, so correct me if I'm wrong.  There is, in later releases, the ability to set a device-side request cache time, so that if you issue an online request it'll store the results in an on-device cache, and if you subsequently reissue the same online request with the same parameter values within that timeout period it'll get the data from the cache rather than going to the server; but my gut instinct is that this is not what you are referring to.
    To access the data in the workflow message, you are correct, you would call getMessageValueCollection().  It will return an object hierarchy with objects defined in WorkflowMessage.js.  Note that if your online request fails, the data won't magically appear in your workflow message.
    To use the data in the workflow message to update a listview, feel free to examine the code in the listview widgets and in API.js.  You can also create a custom listview as follows:
    function customBeforeNavigateForward(screenKey, destScreenKey) {
        // In this example, we only want to replace the listview on the "My Approvals" screen
        if (destScreenKey == 'My_Approvals') {
            // First, we get the MessageValueCollection that we are currently operating on
            var message = getCurrentMessageValueCollection();
            // Next, we'll get the list MessageValue from that MessageValueCollection
            var itemList = message.getData("LeaveApprovalItem3");
            // Because it's a list, the Value of the MessageValue will be an array
            var items = itemList.getValue();
            // Figure out how many items are in the list
            var numOfItems = items.length;
            // Iterate through the results and build our list
            var i = 0;
            var htmlOutput = '<div><ul data-role="listview" data-theme="k" data-filter="true">';
            var firstChar = '';
            while (i < numOfItems) {
                // Get the current item. This will be a MessageValueCollection.
                var currItem = items[i];
                // Get the properties of the current item.
                var owner = currItem.getData("LeaveApprovalItem_owner_attribKey").getValue();
                var type = currItem.getData("LeaveApprovalItem_itemType_attribKey").getValue();
                var status = currItem.getData("LeaveApprovalItem_itemStatus_attribKey").getValue();
                var startDate = currItem.getData("LeaveApprovalItem_startDate_attribKey").getValue();
                var endDate = currItem.getData("LeaveApprovalItem_endDate_attribKey").getValue();
                // Format the data in a specific presentation
                var formatStartDate = Date.parse(startDate).toString('MMM/d/yyyy');
                var formatEndDate = Date.parse(endDate).toString('MMM/d/yyyy');
                // Decide which thumbnail image to use
                var imageToUse = '';
                if (status == 'Pending') {
                    imageToUse = 'pending.png';
                } else if (status == 'Rejected') {
                    imageToUse = 'rejected.png';
                } else {
                    imageToUse = 'approved.png';
                }
                // Add a new line to the listview for this item
                htmlOutput += '<li><a id="' + currItem.getKey() + '" class="listClick">';
                htmlOutput += '<img src="./images/' + imageToUse + '" class="ui-li-thumb">';
                htmlOutput += '<h3 class="listTitle">' + type;
                htmlOutput += ' ( ' + owner + ' ) ';
                htmlOutput += '</h3>';
                htmlOutput += '<p>' + formatStartDate + ' : ' + formatEndDate + '</p>';
                htmlOutput += '</a></li>';
                i++;
            }
            htmlOutput += '</ul></div>';
            // Remove the old listview and add in the new one.  Note: this is suboptimal and should be fixed if you want to use it in production.
            $('#My_ApprovalsForm').children().eq(2).hide();
            $('#My_ApprovalsForm').children().eq(1).after(htmlOutput);
            // Add in a handler so that when a line is clicked on, it'll go to the right details screen
            $(".listClick").click(function() {
                currListDivID = $(this).parent().parent();
                $(this).parent().parent().addClass("ui-btn-active");
                navigateForward("Request_Details", this.id);
                if (isBlackBerry()) {
                    return;
                }
            });
        }
        // All done.
        return true;
    }

  • Data storage, add name to single value of a channel

    Good day to all.
    In my application I use the Data Storage VIs to enable data-collection abilities. I have multiple tests which I have to run on a board, and every single test gives an integer value as output. I use a TDM file where I put all the values, coming from different boards, into a single channel. The channels are already divided into groups which represent the type of board on which I run the test (part number). All the boards have a serial number. Later, when I need to review the collected data, I need to know the value given by a single board, identified by its serial number. So I need to add an identifier, maybe the serial number, to each value I store in the TDM file. How can I do this?
    Thank you, Francesco.

    A string can't be channel data.  So what you might have to do is make the serial number a metadata name, with the value of that metadata being the index of that serial number's data point.
    To be honest, I don't think TDMS is the right format for your data.  TDMS is great for waveforms, but not so much for single data points.  What you really need is a database.
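    If it helps to see the shape of the data: each result is really a (part number, serial number, value) record, which is why a table or database fits better than a waveform channel.  A minimal C++ sketch of the record idea (names and values made up):
    #include <fstream>
    #include <string>
    // One row per tested board, keyed by serial number, instead of one
    // long channel of anonymous values.
    struct TestResult {
        std::string partNumber;    // the group in the TDM layout
        std::string serialNumber;  // the identifier to keep with the value
        int value;                 // the single integer result per test
    };
    void append_result(const std::string& file, const TestResult& r) {
        std::ofstream out(file, std::ios::app);
        out << r.partNumber << ',' << r.serialNumber << ',' << r.value << '\n';
    }
    int main() {
        append_result("results.csv", {"PN-1001", "SN-0001", 42});  // example row
        return 0;
    }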
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • Storage format for 400k+ samples to exchange with Linux/Octave

    I collect data with a PXI system (LV2009, XP); now some colleagues want to analyze the data with Octave on Linux.
    A typical set is up to four channels, in total about 400k+ 16-bit samples, and there are many sets, so I would prefer a binary format.
    Any suggestions for which LV storage format eases the transfer?
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'

    For me the task is still open. I work with Octave 3.6.2. I tried several variations of the fread() command, but I'm somewhat lost.
    It seems to me that there is a leading 16-bit number in the exported little-endian binary file that separates the different data sets.
    E.g. if I write the arrays:
    0  1   2   3   4   5   6   7
    0  11  22  33  44  55  66  77
    to disk, it seems that LabVIEW writes:
    0(16bit) 0  1   2   3   4   5   6   7
    0(16bit) 0  11  22  33  44  55  66  77
    Hence, if I give the octave commands:
    [val, count] = fread(fid, [1,1], "single",0 , "ieee-le");
    val
    [val2, count] = fread(fid, [8,1], "double",0 , "ieee-le");
    val2'
    [val, count] = fread(fid, [1,1], "single",0 , "ieee-le");
    val
    [val3, count] = fread(fid, [8,1], "double",0 , "ieee-le");
    val3'
    the output is what I expect.
    Is there a chance to avoid these extra delimiter numbers?
    Or where is my mistake?
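    To show what I mean, here is a minimal C++ sketch of the layout as I currently understand it (my assumptions: each array is preceded by a 4-byte count, which would match the 'single' I skip in Octave, and the samples are little-endian doubles read on a little-endian machine):
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    // Read one record: an assumed 4-byte count, then that many doubles.
    // Returns an empty vector at end of file or on a truncated record.
    std::vector<double> read_record(std::FILE* f) {
        std::uint32_t count = 0;
        if (std::fread(&count, sizeof count, 1, f) != 1)
            return {};
        std::vector<double> data(count);
        if (std::fread(data.data(), sizeof(double), count, f) != count)
            data.clear();
        return data;
    }
    int main() {
        std::FILE* f = std::fopen("export.bin", "rb");  // hypothetical file name
        if (!f) return 1;
        for (auto rec = read_record(f); !rec.empty(); rec = read_record(f))
            std::printf("record with %zu samples\n", rec.size());
        std::fclose(f);
        return 0;
    }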
    Greetings from Yverdon
    Stefan
    Attachments:
    NI-forum.7z 33 KB

  • HDD showing as removable data storage

    Just done a fresh format and install of Windows. Everything was fine until I installed the nForce3 drivers for my mobo. Now Windows is telling me that my HDD is a removable data storage device. Any advice?

    There is no problem with this. NVIDIA's driver has increased performance for a lot of configurations so far. The generic Microsoft driver that comes with Windows does not have the capability to recognize all the functions of the nForce chipset. One of these is hot plugging and unplugging of SATA drives, except for the drive that holds the operating system. I should mention that there are people who have had problems with the driver related to optical drives and burning software. Finally, the generic Microsoft driver is from 2002, while the nForce driver is always getting updated.

  • My iPhone 4 has no data storage. I have deleted all apps, music, and texts and it still says I only have 1.9 GB available. There is nothing in my storage.


    The problem could be that you don't have enough storage on the device itself, not on iCloud.  Go to Settings>General>Usage to see how much "Storage" you have available on the device.  Farther down the list is the available storage on iCloud.
    Also check:
    Go to Settings>iCloud>Storage & Backups>Manage Storage; there, tap the device you need info on and the resulting screen lists Backup Options with which apps store data on iCloud as well.  Tap Show All Apps to get the complete list of apps and MB used for backup storage.
    A device needs many MB of storage in order to perform a backup to iCloud.
    Also see:  http://support.apple.com/kb/ht4847

  • TREX - Configuring Distributed Slave with Decentralized Data Storage

    I am creating a distributed TREX environment with decentralized data storage across 3 hosts.  The environment is running TREX 7.10 Rev 14 on Windows 2003 x64.  These are the hosts:
    Server 01p: 1st Master NameServer, Master Index Server, Master Queue Server
    Server 02p: 2nd Master NameServer, Slave Index Server
    Server 03p: Slave NameServer, Slave Index Server (GOAL; Not there yet)
    The first and second hosts are properly set up, with the first host creating the index and replicating the snapshot to the slave index server for searching.  The third host is added to the landscape.  When I attempt to change the role of the third host to be a slave for the Master IS and run a check on the landscape, I receive the following errors:
    check...
    wsaphptd03p: file error on 'wsaphptd03p:e:\usr\sap\HPT\TRX00\_test_file_wsaphptd02p_: The system cannot find the file specified'
    wsaphptd02p: file error on 'wsaphptd02p:e:\usr\sap\HPT\TRX00\_test_file_wsaphptd03p_: The system cannot find the file specified'
    slaves: select 'Use Central Storage' for shared slaves on central storage or change base path to non shared location
    The installs were all performed in the same way, with storage on the E: drive using a local install on the stand-alone installation, as described in the TREX71InstallMultipleHosts and TREX71InstallSingleHosts guides.
    Does anybody know what I should do to resolve this issue and add the third host to my TREX distributed landscape?  There really weren't any documents that gave more information besides the install guides.
    Thanks for any help.

    A ticket was opened with SAP customer support.  The response to that ticket is below:
    Many thanks for the connection. We found out that the error message is wrong. It can be ignored if you press 'Shift' and the 'Deploy' button (TREXAdmin tool -> Landscape Configuration).  We will fix this error in the next revision (Revision 25) of TREX 7.1.

  • File path of open data storage

    Hello all!
    I'm using the Open Data Storage, Write Data, and Close Data Storage blocks for storing and extracting result data. Regarding the file path: previously I set the data path by double-clicking the 'Open Data Storage' block and entering the file location in the indicated place, and that worked!
    Now that I have made a stand-alone application of this program and will use it on other computers, the file location I entered in the Open Data Storage block isn't valid on other PCs any more. So I modified my source code by wiring a 'Current VI Path' to the Open Data Storage block's file path input instead of entering it inside the block, and this doesn't work! At run time an error appears in the Write Data block saying that the storage refnum isn't valid!
    I'm wondering why I can't specify the file path like this. Is there any way to specify the file path as the current VI path?
    Thanks!
    Chao

    You need to account for the path changes when the code is built into an application; have a look at this example:
    https://decibel.ni.com/content/docs/DOC-4212
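    The underlying issue, as I understand it: in the development environment 'Current VI Path' is <project dir>\main.vi, but inside a built application it becomes <app dir>\app.exe\main.vi, so a path derived from it gains one extra level.  In C++-flavoured pseudocode the logic is roughly this (names made up; in LabVIEW you handle it with Strip Path):
    #include <filesystem>
    namespace fs = std::filesystem;
    // Normalize a "current VI path" so the containing directory is the same
    // whether we run in the IDE or inside a built executable.
    fs::path data_dir_from_vi_path(const fs::path& viPath) {
        fs::path dir = viPath.parent_path();  // IDE: project dir; EXE: app.exe
        if (dir.extension() == ".exe")        // running inside a built app
            dir = dir.parent_path();          // strip the extra level
        return dir;
    }
    int main() {
        // Both forms should resolve to C:/project
        return data_dir_from_vi_path("C:/project/main.vi") ==
               data_dir_from_vi_path("C:/project/app.exe/main.vi") ? 0 : 1;
    }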
    Beginner? Try LabVIEW Basics
    Sharing bits of code? Try Snippets or LAVA Code Capture Tool
    Have you tried Quick Drop?, Visit QD Community.

  • Uploading to online data storage: Now cannot download songs into iTunes

    Hello all.
    I am not seeing this one on any FAQs, knowledge bases, or discussion boards yet:
    I am doing my initial upload of files to SugarSync, a highly-recommended online data storage service. Since I started, I cannot download songs into iTunes, even from the iTunes Store.
    On any new downloads from the iTunes Store, the song DOES appear in the Music library view but immediately after completing the download it shows the exclamation point warning that the file is not in its location. The new folders get created properly within the iTunes Music folder but the subfolder where the song should be is empty. Each time I had to write iTunes Support to get them to make the files available.
    I can still add files manually, no problem. E.g., I can add *.mp3 or *.wav files or folders, I can convert them to AAC, etc. The glitch seems to occur only when automated loads into iTunes are operating.
    As a test, I downloaded a song from Amazon. Here the problem was different but I could work around it manually. Normally the Amazon process loads songs automatically into iTunes, too. Here again, the download did create the proper folder in the iTunes Music folder, as it should, but this time the symptoms were reversed:
    (a) the mp3 file WAS in its folder as it should be (w/the iTunes DL, the file was NOT there)
    (b) the song did NOT appear in the iTunes Music view (w/the iTunes DL, the song DID appear)
    (c) I was able to browse to the file and tell it manually to load into iTunes (w/iTunes I had to write Support and wait a day).
    (I wonder what's the cause of the differences between the two cases.)
    Strictly speaking, I can't PROVE the problem has anything to do with SugarSync (which otherwise seems good so far), but the DL problem started as soon as I started using it. Something in the SugarSync upload or file-monitoring process, or an odd thing in iTunes, seems to be preventing automated, direct loads into iTunes. And since the data service runs in the background so it can monitor file changes, that might mean I can't buy music anymore! Obviously that would be a dealbreaker with SS. (I have contacted SS on this but they've not had a fair chance to reply yet.)
    1) Anyone else have this problem?
    2) Is this permanent or just temporary while I am doing the initial upload?
    3) Anyone know a solution?
    (FYI, I am a highly-experienced user and otherwise quite handy with iTunes files, library moves, and backups. My library is entirely consolidated and all AAC.)
    Thanks.
    (Oh, and this occurred in both iTunes 8 and the new iTunes 9, so it seems unrelated to the upgrade this week.)

    UPDATE 1. CHANGING BROWSER HELPED -- OR DID IT?
    I called Apple iTunes Support, who said the problem is new to them. The technician's hypothesis was that something, perhaps browser-related, was interfering with the initial creation of a temporary file (which should go to the Recycle Bin), instead causing the completed file to go there.
    He noted that iTunes, though not going through one's browser onscreen, does use settings within one's default browser. I use Mozilla Firefox, so we switched to IE as the default browser, restarted iTunes, and the song downloaded with no problem! Then I switched back to Mozilla, restarted iTunes, and it worked AGAIN with no problem!
    (Dutifully I advised SugarSync, which is still investigating.)
    UPDATE 2: ARTIST NAME CHANGE - SOME FILES GOT MIS-MOVED / MIS-CHANGED
    Definitely something still wrong. This time some pre-existing song entries (not new downloads) lost their connection to their source file.
    In iTunes, which manages folder names for artists and albums automatically, I corrected the spelling for an Artist, so immediately iTunes renamed the folder, and automatically SugarSync noted the change to be uploaded. While the changed folder name and all the songs within were still uploading to SS, in iTunes I saw exclamation points come up -- but only for some of them. Most files got moved or changed correctly, but several lost connection to their file (i.e., the file was removed for the original misnamed folder but never moved into the correctly-named folder). Weird.
    Worse, in only some of those cases did I find the missing *.m4a file in the Recycle bin. (I had to retrieve old, original *.mp3 versions from another folder and re-import each into iTunes manually.) I've never seen iTunes have a problem managing an Artist rename until I started using the live SS process.
    (I've reported this to SS and asked if there is a way to temporarily disable SS to see if that's the problem.)
    [Note: I am willing to try downloads again, but I am wary of trying to rename entire Artist folders again. That was a lot of work.]
    ====
    UPDATE 3: SERIES OF TESTS - 1 FAILURE USING iTUNES
    Still problem occurs, but not always. Today, I rebooted PC. I tried CD, iTunes, & Amazon. I varied having the browser open when using iTunes.
    Here are the results of a series of attempts to download songs. "FAIL" means the file did not load properly into iTunes or loaded but lost its connection (exclamation point warning).
    #  Source  Mozilla  #Songs  Result
    1  CD      Closed   1       OK
    2  iTunes  Closed   1       OK
    3  Amazon  Open     2       OK
    4  iTunes  Open     1       FAIL
    5  Amazon  Open     1       OK
    6  iTunes  Closed   1       OK
    7  Amazon  Open     2       OK
    8  iTunes  Open     2       OK
    (I reported this to SS. Hoping they'll test and find the problem.)
