Loading 7831R memory with contents of Array from Host VI?

Hi,
Can anyone help - how can I load the contents of an array in the
LabVIEW host VI into the memory of the FPGA on the 7831R board? When I
try to do this in a FOR loop, and index each element, it says that the
array must be a fixed size for this target. How do I do this? I
initialise the array in an earlier sequence frame, and am reading from
a local variable.
Thanks for any advice,
Joe.

In LabVIEW FPGA all arrays must have a fixed size, because they are allocated as static memory on the FPGA. As much as possible you should avoid allocating (large) arrays in your LabVIEW FPGA diagram since this will use a lot of FPGA space; about 1% of the FPGA for every 4 bytes of array data.
Instead of using an array in your diagram you can use the Memory Read and Write functions which provide access to 16kB of memory which is separate from the FPGA.
If you want to use an array as you described, you should create an array constant in your diagram or an array control on your front panel. To set the size, right click on the array index display and select 'Set Dimension Size'. Select the fixed size and specify the number of elements in the array.
To pass values to an array on the diagram, use the 'Array Replace Subset' function on the existing array.
If you have an array control on your front panel you can directly pass the whole array from your host application to the FPGA VI and do not need to pass individual array elements.
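Whichever route you take, the host application generally has to hand over exactly the number of elements declared on the FPGA side; a shorter array has to be padded and a longer one truncated before it is written. LabVIEW code can't be shown in plain text, so here is a small language-neutral sketch (written in Java) of just that pad-or-truncate step; FPGA_ARRAY_SIZE is a made-up size, not something from the original post.

import java.util.Arrays;

public class FixedSizeDemo {
    // Hypothetical size of the fixed-size array control on the FPGA VI.
    static final int FPGA_ARRAY_SIZE = 1024;

    // Pad with zeros or truncate so the data always matches the declared size.
    static int[] toFixedSize(int[] hostData) {
        return Arrays.copyOf(hostData, FPGA_ARRAY_SIZE);
    }

    public static void main(String[] args) {
        int[] hostData = new int[300];          // e.g. built in an earlier sequence frame
        int[] fpgaData = toFixedSize(hostData); // always exactly 1024 elements
        System.out.println(fpgaData.length);
    }
}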
Christian L
NI Consulting Services
Christian Loew, CLA
Principal Systems Engineer, National Instruments

Similar Messages

  • Failed to load XML file with Content ID 'XYZ'

    Hello,
    We are using UCM Version:11.1.1.8.1DEV-2014-01-06 04:18:30Z-r114490 (Build:7.3.5.185) with site studio for creating templates and web sites.
    While switching to contribution mode, we get the error 'Failed to load XML file with Content ID XYZ' (here XYZ is the locally checked-in content item).
    In the region we are using Dynamic Converter to convert the style of the native document; below are the region and its element details.
    <region  id="region3" name="Add_Content_Here" flags="1111111100100" metadata="xIdcProfile%3AisHidden%3Dtrue%26xTemplateType%3AisHidden%3Dtrue%26xShowInStaff%3AisHidden%3Dtrue%26xShowInVisitors%3AisHidden%3Dtrue%26xShowInFaculty%3AisHidden%3Dtrue%26xDiscussionCount%3AisHidden%3Dtrue%26xDiscussionType%3AisHidden%3Dtrue" dccommand="ssIncDynamicConversionByRule(SS_DATAFILE, 'Colleges_Template_Rule')">
          <!--$region3_ACTIONS="EIMPRS",region3_DCCOMMAND="ssIncDynamicConversionByRule(SS_DATAFILE, 'Colleges_Template_Rule')" -->
          <element  id="region3_element1" name="Editor" label="Editor" type="1" flags="111111111111111111111100000111100000000000001111001110111010001111101000000000000000000000000000">
            <!--$region3_element1="Add_Content_Here/Editor" -->
            <linktoregioncontent  createnewxml="true" createnewnative="false" choosemanaged="true" chooselocal="false" choosenone="false">
              <choosemanagedquerytext  corecontentonly="FALSE">
                <![CDATA[xWebsiteObjectType <Matches> `Data File` <OR> xWebsiteObjectType <Matches> `Native Document`]]>
              </choosemanagedquerytext>
            </linktoregioncontent>
          </element>
          <switchregioncontent  createnewxml="true" createnewnative="true" choosemanaged="true" chooselocal="false" choosenone="false">
            <createnewnativedoctypes >
              <![CDATA[.doc,.docx,.txt,.rtf]]>
            </createnewnativedoctypes>
            <choosemanagedquerytext  corecontentonly="FALSE">
              <![CDATA[xWebsiteObjectType <Matches> `Data File` <OR> xWebsiteObjectType <Matches> `Native Document`]]>
            </choosemanagedquerytext>
            <defaultmetadata >
              <![CDATA[xIdcProfile%3AisHidden%3Dtrue%26xTemplateType%3AisHidden%3Dtrue%26xCollegesList%3AisHidden%3Dtrue%26xShowInStudents%3AisHidden%3Dtrue%26xShowInStaff%3AisHidden%3Dtrue%26xShowInVisitors%3AisHidden%3Dtrue%26xShowInFaculty%3AisHidden%3Dtrue%26xArticleSection%3AisHidden%3Dtrue%26xDiscussionCount%3AisHidden%3Dtrue%26xDiscussionType%3AisHidden%3Dtrue%26dpTriggerValue%3DCSE]]>
            </defaultmetadata>
          </switchregioncontent>
        </region>
    <!--SS_BEGIN_OPENREGIONMARKER(region3)--><!--$SS_REGIONID="region3"--><!--$include ss_open_region_definition --><!--SS_END_OPENREGIONMARKER(region3)-->
    <!--SS_BEGIN_ELEMENT(region3_element1)--><!--$ssIncludeXml(SS_DATAFILE,region3_element1 & "/node()")--><!--SS_END_ELEMENT(region3_element1)-->
    <!--SS_BEGIN_CLOSEREGIONMARKER(region3)--><!--$include ss_close_region_definition --><!--SS_END_CLOSEREGIONMARKER(region3)-->
    Regards,
    Syed

    Hi Syed,
    Add the following trace sections:
    requestaudit, sitestudio*, system + full verbose tracing
    Clear the server output.
    Replicate the same steps, and once the error shows up, refresh the server output, copy the logs to a text file, and upload them here.
    Thanks,
    Srinath

  • Issue with building an array from a cfhttp request result.

    Here is what I am trying to do. Retrieve a bunch of results from a  REST request. Run a query to see if I should be excluding any of the xmltext entries coming back from the rest request. Build an array of the REST xmltext entries except the entries in the cfquery.
    I have it all working except building the array minus the entries that came back in the cfquery. Here is my code so far.
    <cfquery name="getqueue" datasource="#application.settings.dsn#">
    select * from friends
    where Deactivatedate < #DATEADD('d', 1, CreateODBCDateTime(now()))#
    </cfquery>
    <cfoutput query = "getqueue">
    <cfhttp  blah blah>
    <cfset nodes_parse = XmlParse(CFHTTP.FileContent)>
    <cfset Nodes = xmlSearch(nodes_parse,'friends/friend/date/activedate/')>
    <cfset roleArray = ArrayNew(1)>
    <cfloop from="1" to="#arraylen(Nodes)#" index="i">
       <cfset NodeXML = xmlparse(Nodes[i])>
    <cfset ArrayAppend(roleArray, '[sel_members][]=#NodeXML.activedate.xmlText#&')>
    </cfloop>
    </cfoutput>
    My issue is down in the loop where I do the arrayappend. How would I build an array of values coming back from the cfhttp request but not include any of them if they match up with anything coming back from the getqueue query?

    What about the obvious? Namely, only append when the value is not among the query results:
    <cfif NOT ListFind(ValueList(getqueue.some_column), NodeXML.activedate.xmlText)>
        <cfset arrayAppend(roleArray, '[sel_members][]=#NodeXML.activedate.xmlText#&')>
    </cfif>
    (some_column is a placeholder for whichever friends column holds the value you are matching against.)

  • Creating document with content from a remote system

    Any sample code that I have seen for creating a new document requires that the file (the content) of the document resides on the Server machine.
    What is the best way to create a document with content that comes from a remote file?

    The Java code which is trying to load a file into IFS must have access to the file to load it.
    If the file you are trying to load is in a remote location from the Java code, then you would need to use some remote access protocol, such as NFS, to ensure the file was accessible to the Java code.
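    If mounting the remote location (via NFS or similar) is not convenient, another common pattern is to have the Java code itself pull the content over the network and stage it locally before calling the document-creation API. The sketch below is illustrative only and assumes the remote file is reachable over HTTP; the URL and file name are hypothetical, and the staged path would then be handed to whatever createDocument call your IFS code already uses.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class RemoteContentFetcher {
        // Download a remote file to a local temporary file so the
        // document-creation code has an ordinary local file to read.
        static Path fetchToTempFile(String remoteUrl) throws IOException {
            Path tmp = Files.createTempFile("ifs-upload-", ".dat");
            InputStream in = new URL(remoteUrl).openStream();
            try {
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            } finally {
                in.close();
            }
            return tmp;
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical URL; replace with wherever the remote content actually lives.
            Path local = fetchToTempFile("http://remote-host/docs/report.doc");
            System.out.println("Content staged at " + local);
        }
    }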

  • Can I move Array from one Xserve RAID to another and keep the data

    I'd like to move a set of disks with an existing Array from one Xserve RAID to another. Can I simply shut down both Xserve RAIDs and move the disks over, assuming I put the disks back in the same order?

    Yes, you should be able to do this.
    BE SURE YOU HAVE A BACKUP FIRST.

  • Loading child SWFs with TLF content (from Flash CS5.5) generates reference errors in FB4.6

    I am currently producing e-learning content with our custom AS3-Framework.
    We normally create content files in Flash CS5.5 with dynamic text fields, which are set at runtime from our Framework (AS3 framework in FB4.6).
    Now we are in the progress of language versioning, and since one of the languages is Arabic we are changing the dynamic text fields to TLF fields.
    Then all my problems started.
    In Flash I have chosen to include the TLF engine and to export in frame 2.
    (see: http://helpx.adobe.com/flash/kb/loading-child-swfs-tlf-content.html )
    I get this error:
    VerifyError: Error #1053: Illegal override of getEventMirror in flashx.textLayout.elements.FlowLeafElement.
                at flash.display::MovieClip/gotoAndPlay()
    I guess it is because our framework wants to gotoAndPlay before the TLF engine has been loaded.
    Can this be it?
    Any good suggestions on how to handle loading child SWFs with TLF content?

    Please refer to the following articles; you may find some help there.
    http://www.stevensacks.net/2010/05/28/flash-cs5-tlf-engine-causes-errors-with-loaded-swfs/
    http://www.adobe.com/devnet/flash/articles/preloading-tlf-rsl.html
    I also faced similar problem and posted a question at the following link, but, I still couldn't find the solution.
    http://forums.adobe.com/message/4367968#4367968

  • Since upgrading to Yosemite on my iMac, My Mail app refuses to open, Safari won't load, and I can't upload pictures from my iPad... Please help, any ideas very welcome. I've had my iMac since 2011, and have never had any issues to deal with before..

    Since upgrading to Yosemite on my iMac: My Mail app refuses to open, Safari won't load, and I can't upload pictures from my iPad...
    Please help, any ideas very welcome. I've had my iMac since 2011, and have never had any issues to deal with before..
    Thanks

    29/11/2014 20:17:01.315 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 11 seconds. Pushing respawn out by 49 seconds.
    29/11/2014 20:17:37.013 com.apple.backupd[616]: Finished scan
    29/11/2014 20:17:43.108 com.apple.backupd[616]: Saved event cache at /Volumes/Time Machine Backups/Backups.backupdb/Geoff Lambrechts’s iMac (2)/2014-11-29-200648.inProgress/9B453663-603F-40B8-AC21-24F05C724E15/.6162AD34-38F8-30AB-98E0-4A22FB9D311F.eventdb
    29/11/2014 20:17:43.207 com.apple.backupd[616]: Not using file event preflight for Macintosh HD
    29/11/2014 20:18:01.561 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 11 seconds. Pushing respawn out by 49 seconds.
    29/11/2014 20:18:16.288 com.apple.xpc.launchd[1]: (com.apple.quicklook[715]) Endpoint has been activated through legacy launch(3) APIs. Please switch to XPC or bootstrap_check_in(): com.apple.quicklook
    29/11/2014 20:18:23.705 com.apple.SecurityServer[56]: Session 100013 created
    29/11/2014 20:18:32.046 mdworker[718]: code validation failed in the process of getting signing information: Error Domain=NSOSStatusErrorDomain Code=-67062 "The operation couldn’t be completed. (OSStatus error -67062.)"
    29/11/2014 20:19:01.662 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 11 seconds. Pushing respawn out by 49 seconds.
    29/11/2014 20:19:45.458 com.apple.xpc.launchd[1]: (com.apple.imfoundation.IMRemoteURLConnectionAgent) The _DirtyJetsamMemoryLimit key is not available on this platform.
    29/11/2014 20:19:59.123 com.apple.xpc.launchd[1]: (com.apple.PubSub.Agent[725]) Endpoint has been activated through legacy launch(3) APIs. Please switch to XPC or bootstrap_check_in(): com.apple.pubsub.ipc
    29/11/2014 20:19:59.123 com.apple.xpc.launchd[1]: (com.apple.PubSub.Agent[725]) Endpoint has been activated through legacy launch(3) APIs. Please switch to XPC or bootstrap_check_in(): com.apple.pubsub.notification
    29/11/2014 20:20:01.138 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.
    29/11/2014 20:21:01.484 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.
    29/11/2014 20:21:13.430 com.apple.backupd[616]: Found 4529 files (1.1 GB) needing backup
    29/11/2014 20:21:18.786 com.apple.backupd[616]: 2.82 GB required (including padding), 1.24 TB available
    29/11/2014 20:21:31.775 Console[734]: Failed to connect (_consoleX) outlet from (NSApplication) to (ConsoleX): missing setter or instance variable
    29/11/2014 20:21:34.230 WindowServer[162]: disable_update_timeout: UI updates were forcibly disabled by application "Console" for over 1.00 seconds. Server has re-enabled them.
    29/11/2014 20:21:36.898 WindowServer[162]: common_reenable_update: UI updates were finally reenabled by application "Console" after 3.67 seconds (server forcibly re-enabled them after 1.00 seconds)
    29/11/2014 20:21:36.971 coreservicesd[83]: SFLEntryBase::ListHasChanged mach_msg returned 10000004d
    29/11/2014 20:22:01.817 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.
    29/11/2014 20:23:02.170 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.
    29/11/2014 20:24:02.547 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.
    29/11/2014 20:25:02.168 CalendarAgent[218]: [Refusing to parse response to PROPPATCH because of content-type: .]
    29/11/2014 20:25:02.233 CalendarAgent[218]: [Refusing to parse response to PROPPATCH because of content-type: .]
    29/11/2014 20:25:02.236 CalendarAgent[218]: [Refusing to parse response to PROPPATCH because of content-type: .]
    29/11/2014 20:25:02.284 CalendarAgent[218]: [Refusing to parse response to PROPPATCH because of content-type: .]
    29/11/2014 20:25:03.059 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.
    29/11/2014 20:25:12.674 com.apple.xpc.launchd[1]: (com.apple.imfoundation.IMRemoteURLConnectionAgent) The _DirtyJetsamMemoryLimit key is not available on this platform.
    29/11/2014 20:26:03.464 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.
    29/11/2014 20:27:03.841 com.apple.xpc.launchd[1]: (jp.buffalo.NASPower) Service only ran for 10 seconds. Pushing respawn out by 50 seconds.

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. I am not sure whether it is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a secure file. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it will be loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (S table) and an IOT on top of it. This is not documented in the white paper (the white paper was written for Oracle 10g). In my case this DR$ S table is much used, and the IOT also, but putting it in the keep cache is not as important as the token_info column of the DR I table. A final note: doing SEPARATE_OFFSETS = 'YES' was very bad in my case, the combined size of the two columns is much bigger than having only the TOKEN_INFO column and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
    XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a secure file and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to the one below:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the lob so that it will be loaded in the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff varchar2(100);
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on that issue recently, so I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you have with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is continuously done.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the in-memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy, but this was closed as not a bug and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    For that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick no longer works.
    What worked:
    First set the keep_pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16k.
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which, sorry for the missing credit).
    I ended up doing the following; event 10949 is what avoids the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. Much relieved, I expected to be done, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is exactly what is described in Metalink note 1645634.1, but there it is for a non-partitioned index. The work-around given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find it on Metalink.
    Other points of attention with text index creation (stuff that surprised me at first!):
    - if you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - this, in combination with the fact that on a RAC you may not see any activity on the box you are logged into, can be very frightening: this is because Oracle can choose to start the workers on the other node.
    I now understand much better how text indexing works, and I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...).
    Regards, Pierre

  • I purchased two albums from iTunes but neither properly loaded and I cannot play any of the songs. I have previously loaded other albums with no problems. How do I delete these albums and reload them? I don't see iTunes on iCloud.

    I purchased two albums from iTunes but neither properly loaded on my iPad Mini and I cannot play any of the songs. I have previously loaded other albums with no problems. How do I delete these albums and reload them? I don't see iTunes on iCloud.

    If the albums do appear in the Music app, hold down on the album icons and the X to delete will pop up on the icons. Tap the X to delete the albums.
    You don't see anything in iCloud; you see your previous purchases in the Purchased tab of the store that you bought the content in. So you find your music purchases in the Purchased tab of the iTunes app on your iPad. Tap on Purchased and look for the albums in the Not on this iPad section - assuming that you are able to delete the albums from the device.

  • I can't transfer music from my library in iTunes to my iPhone 3GS. A black ring with a line through it comes up when I try to move music onto my iPhone from iTunes. How can I fix this so that I can load my phone with music?

    I can't transfer music from my library in iTunes to my iPhone 3GS. A black ring with a line through it comes up when I try to move music onto my iPhone from iTunes. How can I fix this so that I can load my phone with music?

    How are you trying to transfer it, and what are your settings in iTunes? I'm not on a computer that has iTunes at the moment, so I'm going from memory and a similar issue. If iTunes is set to sync music then I don't think you can select and drag; if it's set to manually manage music then you can. Also make sure you are dragging to Music on the iPhone.
    Hope this helps

  • Loading an array from sql query

    I am trying to load an array from a SQL query. I need the data to be numeric because I need to format the data using NumberFormat. Here is the code:
    int rd = 0;
    Double[] rd_elapsedtime;
    ArrayList rd_elapsedtime_ar = new ArrayList(rd);
    ResultSet report1d_myResultSet = stmt.executeQuery(report1d);
    if (report1d_myResultSet != null) {
        while (report1d_myResultSet.next()) {
            rd = rd + 1;
            rd_elapsedtime = new Double[rd + 1];
            rd_elapsedtime[rd] = report1d_myResultSet.getDouble("elapsedtime");
            rd_elapsedtime_ar.add(rd_elapsedtime[rd]);
    This code fails to compile, but if I change the type to String it does work; elapsedtime is a number column in the database.
    [javac] /usr/local/tomcat/work/Standalone/localhost/new_test/test1_jsp.java:358: incompatible types
    [javac] found : double
    [javac] required: java.lang.Double
    [javac]           rd_elapsedtime[rd]=report1d_myResultSet.getDouble("elapsedtime");
    Also, can I take the string and convert to a double, format that and then load the array?
    Thank you for your help.

    getDouble returns a double and you need a Double.
    This will do:
    rd_elapsedtime[rd] = new Double(report1d_myResultSet.getDouble("elapsedtime"));

    I tried the suggestion and it failed with a java.lang.ClassCastException error. I also tried changing the type to double and got this error:
    [javac] symbol  : method add (double)
    [javac] location: class java.util.ArrayList
    [javac]           rd_elapsedtime_ar.add(rd_elapsedtime[rd]);
    [javac] ^
    [javac] 1 error
    Thanks for your help.
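    For what it's worth, the Double[] that gets re-created on every row isn't needed at all: an ArrayList of Double can grow as rows are read, and Double.valueOf takes care of the double/Double mismatch from getDouble. Below is a minimal sketch of that approach (not from the original thread); it assumes the same stmt, report1d query and elapsedtime column as above, and applies NumberFormat only when the values are printed.

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.text.NumberFormat;
    import java.util.ArrayList;
    import java.util.List;

    public class ElapsedTimeLoader {
        // Collect the numeric elapsedtime values; the list grows by itself,
        // so there is no need to re-create a Double[] on every row.
        static List<Double> loadElapsedTimes(Statement stmt, String report1d) throws SQLException {
            List<Double> times = new ArrayList<Double>();
            ResultSet rs = stmt.executeQuery(report1d);
            try {
                while (rs.next()) {
                    times.add(Double.valueOf(rs.getDouble("elapsedtime")));
                }
            } finally {
                rs.close();
            }
            return times;
        }

        // Format the values only when displaying them.
        static void printFormatted(List<Double> times) {
            NumberFormat nf = NumberFormat.getNumberInstance();
            nf.setMaximumFractionDigits(2);
            for (Double t : times) {
                System.out.println(nf.format(t));
            }
        }
    }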

  • Speeding Up Load Array From Excel subVI

    Hi there,
    I've got a subVI that loads a 2D double array from an Excel file. Currently it needs about 500 ms between runs to avoid a variant error, something to do with getting everything opened and closed. I was wondering if anybody could see anything obvious that's slowing it down. If you could help me optimise this a little that would be great, even if I could only shave it down to 200 ms.

    Here is a quick example of what I'm talking about.  Placing the App reference open/close outside the loop gets you down to around 300 ms.
    "There is a God shaped vacuum in the heart of every man which cannot be filled by any created thing, but only by God, the Creator, made known through Jesus." - Blaise Pascal
    Attachments:
    Excel Read 2DArray_loop.vi ‏28 KB

  • MIXING ORIGINAL APPLE MEMORY WITH MEMORY FROM ANOTHER MANUFACTURER

    Hi everyone,
    I'm about to hop on the Mac train and buy a MacBook 2.16GHz (the recent previous black model).
    It's coming with 1GB of memory and I wanted to buy 2GB to add to it. I found a good deal on an OWC 2GB DDR2 PC2-5300 667MHz DIMM (it's one stick of 2GB).
    Is it OK to mix memory from different manufacturers, and if so, is it also optimal? Would performance be the same as if I used memory from a single manufacturer, or does it make no difference? Thanks.

    bdkjones wrote:
    As long as the memory is by a major manufacturer (Samsung, Hynix, Micron, etc) it should not matter.
    You do get a slight advantage by using 2 of the same capacity RAM modules. (i.e. 2x1GB chips or 2x2GB chips). In practice, however, this performance gain really isn't noticeable. You have to run a memory benchmark test to see the difference.
    I remember there was this supposed IT guy who claimed that different brands of memory "fight it out". Some of the answers were absolutely hilarious. Seriously, there's nothing that consistent about a single brand of memory. There are always variations, but I wouldn't worry about a thing provided the memory actually meets its published specs. You get variation in quality with memory modules that aren't fully branded by one of the major memory IC manufacturers, but there are some reputable brands out there (OCZ, PNY, Patriot, etc).
    I have an iBook G4 1.42. For two+ years I've been running the permanent Samsung memory with a 1 GB Micron PC2700 module. I've never encountered anything that suggests that there is a problem with it.
    I've also read the Intel specs on what the chipset does with memory sized identically. Apparently it can do that very same thing (memory interleaving) even if the sizes aren't matched. However - it'll only do that for up to the size of the smaller module, and the remainder of the larger module isn't interleaved.

  • Firefox becomes really slow and then eventually unresponsive when loading a page with many hi-res images. Unusually high memory usage, up to 2 GB just for Firefox. This was never a problem with v3.6.

    When loading a page with many hi-res images, Firefox becomes really slow and scrolling becomes jumpy, then it eventually becomes completely unresponsive. Unusually high memory usage of up to 2 GB just for Firefox when loading these pages. This was never a problem with v3.6.

    I encountered the same type of problem: Firefox running terribly slowly and slowing down my entire machine (Core i5 with 256GB SSD). Searching the forums, I found a couple of things about troubleshooting performance issues, one of which was to use '''hardware acceleration''', which is on by default. It was turned on on my PC, '''so I tried deactivating it, and it worked!'''
    So doing the exact opposite of what Mozilla support said solved the problem. It is really a pain now to work with Firefox. I'm using it because I have no choice, but I'd recommend IE and Chrome over Firefox... Whatever, the market will decide once Firefox has become too crappy...

  • Loading huge file with Sql-Loader from Java

    Hi,
    I have a CSV file with approx. 3.5 million records.
    I load this data with sqlldr from within Java like this:
               String command = "sqlldr userid=" + user + "/" + pass
                        + "@" + service + " control='" + ctlFile + "'";
                System.out.println(command);
                if (System.getProperty("os.name").contains("Windows")) {
                    p = Runtime.getRuntime().exec("cmd /C " + command);
                } else {
                    p = Runtime.getRuntime().exec("sh -c " + command);
                }
    It does what I want, i.e. it loads the data into a certain table, BUT it takes too much time. Is there a faster way to load data into an Oracle DB from within Java?
    Thanks, any advice is very welcome

    Have your DBA work on this issue - they can monitor and check performance of SQL*Loader
    SQL*Loader performance tips          [Document 28631.1]
    SQL*LOADER SLOW PERFORMANCE          [Document 1026145.6]
    Master Note for SQL*Loader          [Document 1264730.1]
    HTH
    Srini
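    If shelling out to sqlldr stays too slow, one alternative worth benchmarking (not something suggested in this thread) is to parse the CSV in Java and insert it with JDBC batching, so the load stays in one process and one database session. A rough sketch follows; the connect string, table name, column list and batch size are all hypothetical placeholders, and for very large files a properly tuned SQL*Loader direct path load will often still be faster.

    import java.io.BufferedReader;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CsvBatchLoader {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@//dbhost:1521/service"; // hypothetical connect string
            Connection con = DriverManager.getConnection(url, "user", "pass");
            BufferedReader in = Files.newBufferedReader(Paths.get("data.csv"));
            try {
                con.setAutoCommit(false);
                // hypothetical target table and columns
                PreparedStatement ps = con.prepareStatement(
                        "insert into my_table (col1, col2) values (?, ?)");
                String line;
                int n = 0;
                while ((line = in.readLine()) != null) {
                    String[] f = line.split(",");
                    ps.setString(1, f[0]);
                    ps.setString(2, f[1]);
                    ps.addBatch();
                    if (++n % 10000 == 0) {   // push rows to the database in large batches
                        ps.executeBatch();
                    }
                }
                ps.executeBatch();            // flush the last partial batch
                con.commit();
                ps.close();
            } finally {
                in.close();
                con.close();
            }
        }
    }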
