Issue with VO iteration

Hi,
I have an auto-customization search page that uses a VO in a table region.
After a search it returns 3 to 5 rows, but in the controller I am getting more than 2500 rows and cannot iterate the VO correctly.
The code is below:
if (pageContext.getParameter("saveBG") != null) {
    System.out.println("inside save:");
    OAApplicationModule am = pageContext.getApplicationModule(webBean);
    OAViewObject vo = (OAViewObject) am.findViewObject("FTLItemSummaryVO");
    pageContext.writeDiagnostics(this, "inside submit", 4);
    Row row;
    int qty = 0;
    String vItemNo = "";
    String vUom = "";
    //long totalRows = vo.getEstimatedRowCount();
    //int currentRow = vo.getCurrentRowIndex();
    // setRangeSize(-1) makes the range span the entire row set, so
    // getAllRowsInRange() returns every row the VO has fetched.
    vo.setRangeSize(-1);
    Row[] allRows = vo.getAllRowsInRange();
    //pageContext.writeDiagnostics(this, "inside submit totalRows " + totalRows, 4);
    //pageContext.writeDiagnostics(this, "inside submit currentRow " + currentRow, 4);
    int length = allRows.length;
    System.out.println("inside save length: " + length);
}
It prints 2700 rows.
If I don't use setRangeSize(-1), I get only one row.
I am not able to get the same number of rows as is shown in the table.
Thanks,
Niladri

Hi,
Thanks for replying.
The query in the VO returns more than 2500 rows. But after clicking Go on the auto-customization page, the VO returns 3 rows, which I can see in the result table. When I click the Submit button, in the controller I want to get only those 3 rows that are on my page, but I am getting all 2500 rows. I have used vo.getQuery() in the submit loop in the controller, and it returns the original VO query.
My question is: after the search, the VO query should have changed, and I should get the modified, filtered query.
I have also tried vo.next(), which also loops 2500 times.
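A minimal sketch of one way to walk only the rows the VO currently holds, without disturbing the table's range or current-row pointer, is to use a secondary RowSetIterator; the iterator name and attribute name below are hypothetical:

import oracle.jbo.Row;
import oracle.jbo.RowSetIterator;

RowSetIterator iter = vo.createRowSetIterator("summaryIter");
try {
    iter.setRangeSize(-1); // span the whole row set the VO holds
    iter.reset();          // position before the first row
    while (iter.hasNext()) {
        Row r = iter.next();
        // read attributes here, e.g. r.getAttribute("ItemNo")
    }
} finally {
    iter.closeRowSetIterator(); // always release the secondary iterator
}

Note this still iterates whatever the VO's row set contains; if the VO was re-executed with the full query, it will see all 2500 rows, so the search criteria (or a view criteria) have to be applied to the VO first.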

Similar Messages

  • Issue with af:iterator

Hi, I am using JDeveloper 11g Release 2 (11.1.2.4.0) for ADF development.
As per my requirement I need to display af:inputText components based on DB values, for which I am using the af:iterator component. For example, my DB table has 4 records, so the af:iterator component creates 4 af:inputText components with DB values.
But my problem is that when I type a value into the 3rd af:inputText, it is applied to the 1st af:inputText; that is, when I get the current row from the ViewObject, it gives me the first row only.
Please help me with this.

When you stamp components with af:iterator there is no selectionListener (unlike the af:table component), so you can't select the current row.
Here is one solution: ADF Diary: Setting Current Row in af:iterator (but I'm not sure if it actually works).
Another option is to use JavaScript and an af:clientListener/af:serverListener combination to select the current row when the user clicks an inputText field.
    Dario
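A hedged sketch of the serverListener approach Dario mentions (all names are hypothetical): a client listener on the inputText queues a custom event carrying the row key, and a server-side handler makes that row current through the iterator binding:

import oracle.adf.model.BindingContext;
import oracle.adf.model.binding.DCBindingContainer;
import oracle.adf.model.binding.DCIteratorBinding;
import oracle.adf.view.rich.render.ClientEvent;

// Handler for a hypothetical <af:serverListener type="rowSelect"/> on the inputText.
public void onRowSelect(ClientEvent event) {
    // The client listener is assumed to pass the row key as a "rowKey" parameter.
    String rowKey = (String) event.getParameters().get("rowKey");
    DCBindingContainer bindings =
        (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
    DCIteratorBinding iter = bindings.findIteratorBinding("MyVO1Iterator");
    iter.setCurrentRowWithKey(rowKey); // after this, getCurrentRow() returns the clicked row
}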

  • Anyone else having an issue with TCP connections using iCloud for Windows?

    Hi,
    Before I asked this question, I did wait to see if any related questions came up, but none did, so I submit it now.
    On my admittedly older laptop running Windows 7 64b Home, I've run into difficulties with the iCloud for Windows app to the extent that I had to uninstall it.
It turned out that, as my laptop was running, iCloudServices.exe would endlessly open TCP connections in the background which, while not actively sending or receiving any data, would after some hours number over 100 instances, taking up resources and grinding my laptop's WiFi connection to a crawl. I ended up, within the app, turning off everything, iCloud Drive and Photos (I never used bookmarks), but still this would continue to occur.
    I contacted Apple Support, explaining what was going on, and they stated they only dealt with IOS and gave me a Microsoft Support number. When I called Microsoft support, I came more and more to the realization that the issue was specifically with the iCloud for Windows app, as that was the only software that was endlessly creating and not closing TCP connections as it was. How was Microsoft supposed to solve an issue with Apple code?
So I called Apple back, whereupon they insisted it was a Microsoft issue. I explained that other cloud services installed on the same computer were not having the same issue; it was unique to iCloudServices.exe. They stated they only dealt with iOS. I stated I had purchased an iPad Air less than 7 months ago, and was trying to run iCloud in support of that. They again stated they only dealt with iOS, and suggested I again try Microsoft. I asked them if it was reasonable to expect Microsoft to solve issues with Apple code. They said regardless, there was zero support offered for anything having to do with Windows, and all I could do was uninstall the app, which I did, though that did not feel very satisfactory to me. My thinking is, if Apple writes a Windows app in support of their hardware, they should offer support for it.
    Anyway, I was just wondering, is this an issue unique to me? or have others experienced a similar issue? I found this issue by opening the Windows Resource Monitor, looking under the Networking tab, and scrolling through the TCP Connections section to find 100+ concurrent iCloudServices.exe instances listed, whereas even Chrome, with multiple tabs and extensions, topped out at around 20.
My one-month-old desktop, DIY, sports a solid Asus 1150 MoBo, i7-4790k CPU, 16GB RAM, and an EVGA GTX 970 video card. I list some specs only to illustrate that this computer has no hardware issues in comparison to my long-in-the-tooth laptop. On this desktop, running Win 8.1 Pro 64b, at least as many identifiably Apple background-service TCP connections are created, even compared to Chrome, regardless of many tabs being open, many extensions, and even some related apps. Adobe does not even come close, though I run the full CC subscription. On this new computer there are currently over 50 TCP connections and loopbacks that do not identify themselves, with just a - for the Image and PID. After the experience on my laptop, I wonder how many of these are generated by Apple software, if not specifically iCloud software?
The frustrating aspect of these connections is that they seem in no way active. While the Chrome and Adobe connections can be seen to be transferring data, as long as I am not running iTunes or have my iPad actually plugged in, it seems 99% of the time these iCloudServices.exe connections are just taking up ports, neither sending nor receiving any data discernible to me under the Processes with Network Activity or Network Activity lists, both displayed in the same window as the TCP Connections in the Windows Resource Monitor.
    Though I am fairly ignorant as regards coding, it seems as if there is no call to close a connection, very specifically, iCloudServices.exe, when it is no longer needed, and the next time a connection is needed, a new one is opened, rather than accessing the one previously opened. The only other reason I could imagine this might be occurring is if my Norton Internet Security software might mask and/or block the port after a certain time of inactivity.
    Anyone out there have any ideas or advice about this? Thanks in advance.

    Thanks jared,
    I'm still dealing with this issue through Apple. Some time after I posted this, I contacted Apple again. They did start a case up for me, as I was experiencing the same behavior on two different machines, with two different versions of Windows.
    So far it remains unsolved. I've logged iClouds for Windows on my desktop, which is brand new, then logged for awhile after completely uninstalling Norton Security Suite, depending on the Microsoft security for some time, and finally logged after I uninstalled iCloud for Windows, restarted, installed a clean download, and connected using a completely different test account, which Apple set up for me. None of this made any difference. Looking at the logs, it seems every 10 minutes, iCloudServices.exe creates a new TCP connection to confirm I'm using less than 5GB on iCloud, (which I am by a good margin, using less than 2GB), it seems this connection is not closed, and when the next iteration rolls around 10 minutes later, a new TCP connection is created. I come very close to having 6 TCP connections created per hour, until I restart my computer. This works out to... 6 x 24 = 144/day.
    Perhaps the article you posted will shed some further light on this. I'm thinking seeing the state of the connection through netstats, at the least, could help.
    For the last week, I've been putting a hold on further logging, as Apple wants me to create a new user account on one of my computers, install iCloud for Windows there, and log it running in the other account. This however basically means I cannot use my computer for a fair number of hours, and I've been busy enough with work the past week that I haven't the time or energy to afford to set this up and run it. I've had need of my computers too much for the past week.

  • Issue with the partialTrigger on ADF Table

JDeveloper version: 11.1.2.3.0
I have replicated an issue with partialTrigger on the table component. A sample application can be downloaded from here; it needs the HR schema to run.
In the sample pageFragment below, I try refreshing the ADF table in two ways:
1. Set the addEmployee button's id in the partialTrigger of the ADF Table component.
2. Set the addEmployee button's id in the partialTrigger of the PanelBox component.
Note the difference - the 1st doesn't work whereas the 2nd works fine. Do we have any additional constraints when refreshing an ADF table using partialTrigger?
I replicated the use case in the example below:
    PageFragment Structure -
PanelBox
|
|__ toolbar facet
|       |
|       |__ addEmployee button
|
|__ ADF Table
    PageFragment Code
<af:panelBox text="PanelBox2" id="pb1">
    <f:facet name="toolbar">
        <af:commandButton actionListener="#{bindings.addEmployee.execute}" text="addEmployee2"
                          disabled="#{!bindings.addEmployee.enabled}" id="cb1" partialSubmit="true"/>
    </f:facet>
    <af:table value="#{bindings.EmployeesView1.collectionModel}" var="row"
              rows="#{bindings.EmployeesView1.rangeSize}"
              emptyText="#{bindings.EmployeesView1.viewable ? 'No data to display.' : 'Access Denied.'}"
              fetchSize="#{bindings.EmployeesView1.rangeSize}" rowBandingInterval="0"
              selectedRowKeys="#{bindings.EmployeesView1.collectionModel.selectedRow}"
              selectionListener="#{bindings.EmployeesView1.collectionModel.makeCurrent}"
              rowSelection="single" id="t1" displayRow="selected" partialTriggers="::cb1"
              styleClass="AFStretchWidth">
        <af:column sortProperty="#{bindings.EmployeesView1.hints.EmployeeId.name}" sortable="false"
                   headerText="#{bindings.EmployeesView1.hints.EmployeeId.label}" id="c1">
            <af:inputText value="#{row.bindings.EmployeeId.inputValue}"
                          label="#{bindings.EmployeesView1.hints.EmployeeId.label}"
                          required="#{bindings.EmployeesView1.hints.EmployeeId.mandatory}"
                          columns="#{bindings.EmployeesView1.hints.EmployeeId.displayWidth}"
                          maximumLength="#{bindings.EmployeesView1.hints.EmployeeId.precision}"
                          shortDesc="#{bindings.EmployeesView1.hints.EmployeeId.tooltip}" id="it1">
                <f:validator binding="#{row.bindings.EmployeeId.validator}"/>
                <af:convertNumber groupingUsed="false"
                                  pattern="#{bindings.EmployeesView1.hints.EmployeeId.format}"/>
            </af:inputText>
        </af:column>
        <af:column sortProperty="#{bindings.EmployeesView1.hints.FirstName.name}" sortable="false"
                   headerText="#{bindings.EmployeesView1.hints.FirstName.label}" id="c2">
            <af:inputText value="#{row.bindings.FirstName.inputValue}"
                          label="#{bindings.EmployeesView1.hints.FirstName.label}"
                          required="#{bindings.EmployeesView1.hints.FirstName.mandatory}"
                          columns="#{bindings.EmployeesView1.hints.FirstName.displayWidth}"
                          maximumLength="#{bindings.EmployeesView1.hints.FirstName.precision}"
                          shortDesc="#{bindings.EmployeesView1.hints.FirstName.tooltip}" id="it2">
                <f:validator binding="#{row.bindings.FirstName.validator}"/>
            </af:inputText>
        </af:column>
        <af:column sortProperty="#{bindings.EmployeesView1.hints.LastName.name}" sortable="false"
                   headerText="#{bindings.EmployeesView1.hints.LastName.label}" id="c3">
            <af:inputText value="#{row.bindings.LastName.inputValue}"
                          label="#{bindings.EmployeesView1.hints.LastName.label}"
                          required="#{bindings.EmployeesView1.hints.LastName.mandatory}"
                          columns="#{bindings.EmployeesView1.hints.LastName.displayWidth}"
                          maximumLength="#{bindings.EmployeesView1.hints.LastName.precision}"
                          shortDesc="#{bindings.EmployeesView1.hints.LastName.tooltip}" id="it3">
                <f:validator binding="#{row.bindings.LastName.validator}"/>
            </af:inputText>
        </af:column>
        <af:column sortProperty="#{bindings.EmployeesView1.hints.DepartmentId.name}" sortable="false"
                   headerText="#{bindings.EmployeesView1.hints.DepartmentId.label}" id="c11">
            <af:selectOneChoice value="#{row.bindings.DepartmentId.inputValue}"
                                label="#{row.bindings.DepartmentId.label}"
                                required="#{bindings.EmployeesView1.hints.DepartmentId.mandatory}"
                                shortDesc="#{bindings.EmployeesView1.hints.DepartmentId.tooltip}" id="soc1">
                <f:selectItems value="#{row.bindings.DepartmentId.items}" id="si1"/>
            </af:selectOneChoice>
        </af:column>
    </af:table>
</af:panelBox>
    Thanks,
    Rajdeep

    Hi Frank,
Indeed it worked. But I have two questions now:
1. We are adding the employee record using a method called through a method action binding. Shouldn't the bindings be aware of this, i.e. shouldn't synchronization of the binding layer happen when a method action binding is used?
2. Why does it work when I apply the partialTrigger to the panelBox? Why is the "employeesViewImpl" code not required when I apply the partialTrigger to the panelBox? Is it the case that the iterator is re-executed when you refresh the parent component?
    Thanks,
    Rajdeep
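On the second question, a minimal sketch of what re-executing the table's iterator programmatically looks like (the iterator name is an assumption based on the page fragment above):

import oracle.adf.model.BindingContext;
import oracle.adf.model.binding.DCBindingContainer;
import oracle.adf.model.binding.DCIteratorBinding;

// Re-execute the iterator after the method action adds a row, so the row set
// the table stamps over reflects the change before the table is repainted.
DCBindingContainer bindings =
    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
DCIteratorBinding employeesIter = bindings.findIteratorBinding("EmployeesView1Iterator");
employeesIter.executeQuery();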

  • Issue with SRDemo error handling

    Hi All,
    Glad the forums are back up and running. In debugging some error-handling issues in our own application, I found an issue in the error handling code of SRDemo. I thought I'd post the issue here, as many of us (myself included) use some SRDemo code as the basis for our own applications.
    The issue can be found in the oracle.srdemo.view.frameworkExt.SRDemoPageLifecycle class, specifically in the translateExceptionToFacesErrors method. I'll show the code that has the issue first, and explain the issue afterwards:
        if (numAttr > 0) {
            Iterator i = attributeErrors.keySet().iterator();
            while (i.hasNext()) {
                String attrNameKey = (String) i.next();
                /*
                 * Only add the error to show to the user if it was related
                 * to a field they can see on the screen. We accomplish this
                 * by checking whether there is a control binding in the current
                 * binding container by the same name as the attribute with
                 * the related exception that was reported.
                 */
                ControlBinding cb = ADFUtils.findControlBinding(bc, attrNameKey);
                if (cb != null) {
                    String msg = (String) attributeErrors.get(attrNameKey);
                    if (cb instanceof JUCtrlAttrsBinding) {
                        attrNameKey = ((JUCtrlAttrsBinding) cb).getLabel();
                    }
                    JSFUtils.addFacesErrorMessage(attrNameKey, msg);
                }
            }
        }
Now, this bit of code attempts to be "smart" and only show error messages relating to attributes if those attributes are in fact displayed on the screen. It does so by using a utility method to find a control binding for the attribute name. There are two issues with this code, one obvious, and one that is a bit more subtle.
    The obvious issue: if there is a binding in the page definition, it doesn't necessarily mean that the attribute is shown on the screen. It's a good approximation, but not exact.
    The other issue is more subtle, and led to errors being "eaten," or not shown, in our application. The issue comes if you are using an af:table to display and update your data. In that case, the findControlBinding will not find anything for that attribute, since the attribute is contained within a table binding.
    Just posting this as a word to the wary.
    Best,
    john
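A hedged sketch of one possible adjustment (the method and the iteration over getIterBindingList() are assumptions against the 10.1.3-era binding API, not SRDemo code): treat an attribute as visible when any iterator binding in the container exposes it, which also covers attributes rendered through a table binding:

import java.util.Iterator;
import oracle.adf.model.binding.DCBindingContainer;
import oracle.adf.model.binding.DCIteratorBinding;
import oracle.jbo.AttributeDef;

// Consider an attribute "visible" if any iterator binding in the container
// exposes it; af:table columns bind through the iterator, so their errors
// are no longer eaten. (Still an approximation, like the original check.)
private boolean isAttributeVisible(DCBindingContainer bc, String attrName) {
    for (Iterator it = bc.getIterBindingList().iterator(); it.hasNext();) {
        DCIteratorBinding ib = (DCIteratorBinding) it.next();
        for (AttributeDef def : ib.getAttributeDefs()) {
            if (attrName.equals(def.getName())) {
                return true;
            }
        }
    }
    return false;
}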

    somehow, this message got in the wrong thread....
    Hi Frank,
    Yes, I simply scripted it out this way to contrast the behaviour if the first attribute was read-only vs not read-only. I found the issue on a page in our app that was simply drag-and-drop the VO from the data control on the page.
    It's quite annoying, because our particular use case that hit this error is a "save" button on the page. If the commit operation doesn't return any errors (and it doesn't in this use case!), we add a JSF message saying "save successful" - then the attribute errors are further added later in the page lifecycle, so we get 3 messages: "Save successful" and "Fix this error" and "Tried to set read-only attribute" - quite confusing to the end-user when the only message they should see is "fix this error."
    At any rate, the fix is to simply re-order the attributes in the page definition - that doesn't affect the UI at all, other than to fix this issue.
    John
    it was supposed to be something like:
    Hi Frank,
    Thanks for the reply. I was simply posting this here so that people who use the SRDemo application techniques as a basis for developing the same functionality in their own apps (like me) can be aware of the issue, and avoid lots of head-scratching to figure out "what happened to the error message?"
    John

  • Issues with nested for loops - saving images from a camera

    Hi all,
I've written a VI to capture a specific number of images ('Image No') and save them to a folder of my choice. Each image is identified sequentially. However, I wish to do a number of iterations ('Run') of this capture sequence, such that the filename of each image would be 'Filename (Run)_(Image No).png'; e.g. run 5, image 10 would be 'Filename 5_10.png'. I have tried a nested for loop for this, but I receive the error 'Asynchronous I/O operation in progress' (I've attached a printscreen).
Can anyone assist me in solving this problem? I previously posted this in Machine Vision but got no response (http://forums.ni.com/t5/Machine-Vision/Capturing-image-sequences-issues-with-nested-for-loops/m-p/19...). Please find attached my VI.
    Kindest regards and thanks,
    Miika
    Solved!
    Go to Solution.
    Attachments:
    Labview problem.jpg ‏3841 KB
    Image sequence save to file.vi ‏48 KB

    Miika,
the problem is not the filename, but the name of the folder (AHHHHH!). You try to create the same folder in the outer for loop over and over again.... (it is the error message above the '======', not below)
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Issue with "read by other session" and a parallel MERGE query

    Hi everyone,
    we have run into an issue with a batch process updating a large table (12 million rows / a few GB, so it's not that large). The process is quite simple - load the 'increment' from a file into a working table (INCREMENT_TABLE) and apply it to the main table using a MERGE. The increment is rather small (usually less than 10k rows), but the MERGE runs for hours (literally) although the execution plan seems quite reasonable (can post it tomorrow, if needed).
    The first thing we've checked is AWR report, and we've noticed this:
Top 5 Timed Foreground Events:

Event                   | Waits     | Time(s) | Avg wait (ms) | % DB time | Wait Class
DB CPU                  |           | 10,086  |               | 43.82     |
read by other session   | 3,968,673 | 9,179   | 2             | 39.88     | User I/O
db file scattered read  | 1,058,889 | 2,307   | 2             | 10.02     | User I/O
db file sequential read | 408,499   | 600     | 1             | 2.61      | User I/O
direct path read        | 132,430   | 459     | 3             | 1.99      | User I/O

So obviously most of the time was consumed by the "read by other session" wait event. There were no other queries running on the server, so in this case "other session" actually means the parallel processes used to execute the same query. The main table (the one that's updated by the batch process) has "PARALLEL DEGREE 4" so Oracle spawns 4 processes.
    I'm not sure how to fix this. I've read a lot of details about "read by other session" but I'm not sure it's the root cause - in the end, when two processes read the same block, it's quite natural that only one does the physical I/O while the other waits. What really seems suspicious is the number of waits - 4 million waits means 4 million blocks, 8kB each. That's about 32GB - the table has about 4GB, and there are less than 10k rows updated. So 32 GB is a bit overkill (OK, there are indexes etc. but still, that's 8x the size of the table).
    So I'm thinking that the buffer cache is too small - one process reads the data into cache, then it's removed and read again. And again ...
    One of the recommendations I've read was to increase the PCTFREE, to eliminate 'hot blocks' - but wouldn't that make the problem even worse (more blocks to read and keep in the cache)? Or am I completely wrong?
    The database is 11gR2, the buffer cache is about 4GB. The storage is a SAN (but I don't think this is the bottleneck - according to the iostat results it performs much better in case of other batch jobs).

OK, so a bit more details - we've managed to significantly decrease the estimated cost and runtime. All we had to do was a small change in the SQL - instead of

MERGE /*+ parallel(D DEFAULT)*/ INTO T_NOTUNIFIED_CLIENT D /*+ append */
  USING (SELECT *
      FROM TMP_SODW_BB) S
  ON (D.NCLIENT_KEY = S.NCLIENT_KEY AND D.CURRENT_RECORD = 'Y' AND S.DIFF_FLAG IN ('U', 'D'))
  ...

(which is the query listed above) we have done this:

MERGE /*+ parallel(D DEFAULT)*/ INTO T_NOTUNIFIED_CLIENT D /*+ append */
  USING (SELECT *
      FROM TMP_SODW_BB WHERE DIFF_FLAG IN ('U', 'D')) S
  ON (D.NCLIENT_KEY = S.NCLIENT_KEY AND D.CURRENT_RECORD = 'Y')
  ...

i.e. we have moved the condition from the MERGE ON clause to the SELECT. And suddenly, the execution plan is this:
OPERATION                             OBJECT_NAME               OPTIONS           COST
MERGE STATEMENT                                                                     239
  MERGE                               T_NOTUNIFIED_CLIENT
    PX COORDINATOR
      PX SEND                         :TQ10000                  QC (RANDOM)         239
        VIEW
          NESTED LOOPS                                          OUTER               239
            PX BLOCK                                            ITERATOR
              TABLE ACCESS            TMP_SODW_BB               FULL                  2
                Filter Predicates
                  OR
                    DIFF_FLAG='D'
                    DIFF_FLAG='U'
            TABLE ACCESS              T_NOTUNIFIED_CLIENT       BY INDEX ROWID        3
              INDEX                   AK_UQ_NOTUNIF_T_NOTUNI    RANGE SCAN            2
                Access Predicates
                  AND
                    D.NCLIENT_KEY(+)=NCLIENT_KEY
                    D.CURRENT_RECORD(+)='Y'
                Filter Predicates
                  D.CURRENT_RECORD(+)='Y'

Yes, I know the queries are not exactly the same - but we can fix that. The point is that the TMP_SODW_BB table contains 1639 rows in total, and 284 of them match the moved 'IN' condition. Even if we remove the condition altogether (i.e. 1639 rows have to be merged), the execution plan does not change (the cost increases to about 1300, which is proportional to the number of rows).
But with the original IN condition (which turns into an OR combination of predicates) in the MERGE ON clause, the cost suddenly skyrockets to 990,000 and it's damn slow. It seems like a problem with cost estimation, because once we remove one of the values (so there's only one value in the IN clause), it works fine again. So I guess it's a planner/estimator issue ...

  • Adobe Reader X:Recurring issues with print and save requiring reinstalls

One of our staff has recurring issues with PDF print, view, and save. These issues are temporarily resolved by re-installing Reader from the Adobe website. No error messages are displayed - the functions just cease to work. In the most recent iteration of the issue, a document attached to an email could be saved (right mouse click, Save As) within the email program but not within Reader itself.
It appears that Reader is getting corrupted - but by what? This individual does a lot of downloading of web-published PDFs - a possible source, but one I can't confirm.
    Information on a permanent resolution or known cause would be greatly appreciated.

    Thanks v much Twilight - once I had posted the question I saw the same issues were posted by another.
    I'll try the print as image under the advanced print tab and see how I get on with that. Hopefully Adobe will bring out a fix soon.
    Thanks again.
    Cattswood

  • Issue with "firstRecord" Business Component method of JAVA Data bean API.

    Hi,
    Following is my use-case scenario:
I have to add or associate a child MVG business component (CUT_Address) with the parent business component (Account) using the Java Data Bean API.
My requirement is first to check whether the child business component (i.e. CUT_Address) exists. If it exists, then associate it with the parent business component (Account); otherwise create a new CUT_Address and associate it with the account.
The code (using the Java Data Bean APIs) goes as follows:

SiebelBusObject sBusObj = connBean.getBusObject("Account");
SiebelBusComp parentBusComp = sBusObj.getBusComp("Account");
SiebelBusComp childBusComp;
SiebelBusComp associatedChildBusComp;

// Retrieve the required account. Please assume Account1 exists.
parentBusComp.activateField("Name");
parentBusComp.clearToQuery();
parentBusComp.setSearchSpec("Name", "Account1");
parentBusComp.executeQuery2(true, true);
parentBusComp.firstRecord();

int counter = 0;
while (counter < Number_Of_Child_Records_To_Insert) {
    childBusComp = parentBusComp.getMVGBusComp("City");
    associatedChildBusComp = childBusComp.getAssocBusComp();
    childBusComp.activateField("City");
    childBusComp.clearToQuery();
    childBusComp.setSearchSpec("City", Vector_of_city[counter]);
    childBusComp.executeQuery2(true, true);
    if (childBusComp.firstRecord()) {
        // Child already exists: process accordingly
    } else {
        // Child does not exist: process accordingly
    }
    childBusComp.release();
    childBusComp = null;
    associatedChildBusComp.release();
    associatedChildBusComp = null;
    counter++;
}
Now the issue with this code is: on the first iteration, firstRecord() returns false (0) if no matching record exists. However, from the second iteration onwards, firstRecord() returns true (1) even if there is no record matching the search specification.
    Any input towards the issue is highly appreciable.
    Thanks,
    Rohit.

    Setting the view mode to "AllView" helped.
    Thanks for the lead!
    In the end, I also had to invoke the business component method SetAdminMode with "true" as the argument so that I could also modify the records from my script.
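For reference, a hedged sketch of where those two calls could go in the loop above; the view-mode code (3 for AllView) and the invokeMethod argument style are assumptions about the Java Data Bean API, so verify them against your Siebel version:

childBusComp = parentBusComp.getMVGBusComp("City");
childBusComp.setViewMode(3); // assumed: 3 = AllView, so the search spec sees all records
childBusComp.invokeMethod("SetAdminMode", new String[] { "true" }); // allow updates from the script
childBusComp.activateField("City");
childBusComp.clearToQuery();
childBusComp.setSearchSpec("City", Vector_of_city[counter]);
childBusComp.executeQuery2(true, true);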

  • Oracle RDF / Joseki : issue with large literals

    Hi,
    I have been using Joseki to query an Oracle RDF model. There seems to be an issue with large literals (according to a few unreliable tests, I would say this concerns literals around and over 4000 chars).
    Here are the two potential behaviours :
    First case:
If the result contains several lines, one of which contains an overly large literal, there is NO exception on the server side, but the resulting XML is incomplete.
It misses the "line" containing the large literal, and the XML stops there (which means it also misses the closing </results> and </sparql> tags). In my case, I am using the results through Jena's sparqlService, which means I get this message:
XMLStreamException: Unexpected EOF; was expecting a close tag for element <results>
at [row,col {unknown-source}]: [31,0]
    Second case:
If the query only returns one line which contains an overly large literal, the client receives a simple "HttpException: 500 Internal Server Error".
Here is the error message from my server:
INFO [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] (SPARQL.java:165) - Throwable: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
java.lang.ClassCastException: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
    at oracle.spatial.rdf.client.jena.OracleSemIterator.getNodesFromResultSet(OracleSemIterator.java:579)
    at oracle.spatial.rdf.client.jena.OracleSemIterator.next(OracleSemIterator.java:445)
    at oracle.spatial.rdf.client.jena.OracleLeanQueryIter.moveToNextBinding(OracleLeanQueryIter.java:135)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.moveToNextBinding(QueryIterConvert.java:56)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.moveToNextBinding(QueryIterRepeatApply.java:76)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
at com.hp.hpl.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:54)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:50)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.ResultSetStream.hasNext(ResultSetStream.java:62)
    at org.joseki.processors.SPARQL.executeQuery(SPARQL.java:309)
    at org.joseki.processors.SPARQL.execQueryWorker(SPARQL.java:288)
    at org.joseki.processors.SPARQL.execQueryProtected(SPARQL.java:126)
    at org.joseki.processors.SPARQL.execOperation(SPARQL.java:120)
    at org.joseki.processors.ProcessorBase.exec(ProcessorBase.java:112)
    at org.joseki.ServiceRequest.exec(ServiceRequest.java:36)
    at org.joseki.Dispatcher.dispatch(Dispatcher.java:59)
    at org.joseki.http.Servlet.doCommon(Servlet.java:177)
    at org.joseki.http.Servlet.doGet(Servlet.java:138)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Would there be any fix / workaround ?
    Please let me know if you need further information / tests.
    Thanks,
    Regards,
    Julien
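For what it's worth, a hedged sketch of one workaround direction for the ClassCastException in the second case, if the reading code were under your control; getVendorObj() on the WebLogic wrapper is my assumption here, not something confirmed in this thread:

import java.sql.Clob;

// Unwrap WebLogic's JDBC wrapper to reach the native oracle.sql.CLOB before
// casting; a plain cast fails with the ClassCastException shown above.
Clob clob = resultSet.getClob(1); // resultSet is hypothetical
oracle.sql.CLOB oraClob;
if (clob instanceof weblogic.jdbc.wrapper.Clob) {
    oraClob = (oracle.sql.CLOB) ((weblogic.jdbc.wrapper.Clob) clob).getVendorObj(); // assumed API
} else {
    oraClob = (oracle.sql.CLOB) clob;
}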

    Thanks for your reply.
    While trying to build up a small test case, I found out why there were discrepancies between the two cases I described.
    Indeed, usually, the two cases return the same thing (no exception on the server side, but incomplete resulting XML).
    The exception I described happened when I tried something else. Since I saw that issues were coming from long literals, I used fn:string-length (ARQ) to figure out how long they were.
    The test case resulting in the CLOB-cast-exception is:
    - too large literal
    - only one result "line" containing this literal
- usage of fn:string-length (which does not change the behaviour in the other cases, i.e. no long literals and/or several lines).
    Anyway, you will receive the other test cases shortly.
    Thanks,
    Regards,
    Julien

  • W540 and Ultradock Display Port Issue with Monoprice 27" Monitor

    Hi,
    I have been troubleshooting an issue with my W540 and Ultradock for over a month now. 
The issue is that the DisplayPort will not recognize my Monoprice 27" monitor. This same monitor was working previously with my W520 and a different dock (ThinkPad Mini Dock Plus Series 3) through the same display adapter. The current monitor will work on the DVI port, but I obviously can't get the full resolution of the monitor without the DisplayPort adapter.
    So far I have tried:
    1) Updating the drivers of the laptop
    2) Updating the drivers of the dock itself to 2.22
    3) Power cycling the monitor with the PC connected, not connected, etc.
    4) Switching between Standard/Advanced mode in the BIOS
    5) Nearly every iteration of monitor detection in the screen resolution settings
    6) Changing cables, adapters, and display ports
What is different in my scenario (based on the other issues I've seen with the Ultradock and W540) is that the monitor is never even detected under any circumstance through the DisplayPort. Device Manager never sees it, nor does the normal monitor detection procedure.
    Any help would be greatly appreciated.
    Best,
    Austin

    Any ideas?
    I am about to throw this in the trash...  It's now defaulting my DVI port monitor to 640x480 without the option to change it.
    Moderator comment: Post edited to conform with the Community Rules. Keep it clean.

  • Flex file upload issue with large image files

Hello, I have created a sample Flex application to upload an image, and also created a Java servlet to upload and save the image, deployed on a local Tomcat server. I am testing the application on a LAN. I am able to upload small as well as large image files (1 MB) from some PCs, but on some other PCs I am getting an IOError while uploading large image files, although it works fine for small images. The image upload hangs at 10%-20% and throws an IOError. Surprisingly, it works OK on XP systems and causes issues on Windows 7 systems.
Please give me any idea towards a solution.
    In Tomcat server side it is giving following error:
    request: org.apache.catalina.connector.RequestFacade@c19694
    org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. Stream ended unexpectedly
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:371)
        at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
            at flex.servlets.UploadImage.doPost(UploadImage.java:47)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:877)
        at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:594)
            at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1675)
            at java.lang.Thread.run(Thread.java:722)
    Caused by: org.apache.commons.fileupload.MultipartStream$MalformedStreamException: Stream ended unexpectedly
        at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:982)
        at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:886)
            at java.io.InputStream.read(InputStream.java:101)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:96)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:66)
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:366)
    UploadImage.java:
    package flex.servlets;
import java.io.*;
import java.util.*;
import java.util.regex.*;

import javax.servlet.*;
import javax.servlet.http.*;

import org.apache.commons.fileupload.*;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class UploadImage extends HttpServlet {

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        doPost(request, response);
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        boolean isMultipart = ServletFileUpload.isMultipartContent(request);
        System.out.println("request: " + request);
        if (!isMultipart) {
            System.out.println("File Not Uploaded");
        } else {
            FileItemFactory factory = new DiskFileItemFactory();
            ServletFileUpload upload = new ServletFileUpload(factory);
            List items = null;
            try {
                items = upload.parseRequest(request);
                System.out.println("items: " + items);
            } catch (FileUploadException e) {
                e.printStackTrace();
                return;
            }
            Iterator itr = items.iterator();
            while (itr.hasNext()) {
                FileItem item = (FileItem) itr.next();
                if (item.isFormField()) {
                    String name = item.getFieldName();
                    System.out.println("name: " + name);
                    String value = item.getString();
                    System.out.println("value: " + value);
                } else {
                    try {
                        String itemName = item.getName();
                        Random generator = new Random();
                        int r = Math.abs(generator.nextInt());
                        // Strip the extension characters from the uploaded file name.
                        String reg = "[.*]";
                        String replacingtext = "";
                        System.out.println("Text before replacing is: " + itemName);
                        Pattern pattern = Pattern.compile(reg);
                        Matcher matcher = pattern.matcher(itemName);
                        StringBuffer buffer = new StringBuffer();
                        while (matcher.find()) {
                            matcher.appendReplacement(buffer, replacingtext);
                        }
                        int indexOfDot = itemName.indexOf(".");
                        String domainName = itemName.substring(indexOfDot);
                        System.out.println("domainName: " + domainName);
                        String finalimage = buffer.toString() + "_" + r + domainName;
                        System.out.println("Final Image===" + finalimage);
                        File savedFile = new File(
                                getServletContext().getRealPath("assets/images/") + "/LowesFloorPlan.png");
                        //File savedFile = new File("D:/apache-tomcat-6.0.35/webapps/ROOT/example/" + "\\test.jpeg");
                        item.write(savedFile);
                        out.println("<html>");
                        out.println("<body>");
                        out.println("<table><tr><td>");
                        out.println("");
                        out.println("</td></tr></table>");
                        out.println("image inserted successfully");
                        out.println("</body>");
                        out.println("</html>");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}

It only occurs on Windows 7 systems, and the root of this problem is the SSL certificate.
Workaround for this:
Open the application in IE and click on the certificate error link in the address bar. Click Install Certificate and you are done.
Happy programming.
    Thanks
    DevSachin

  • Annoying issue with Logger

When I tell my FileHandler to append to an existing XML log file, it adds the following two lines to the log file every time:
    <?xml version="1.0" encoding="windows-1252" standalone="no"?>
    <!DOCTYPE log SYSTEM "logger.dtd">
This means that, if I have had hundreds of iterations of my loop, I get hundreds of XML headers in my log file, which seems asinine (not to mention the fact that a new LOG tag is created every time too). Is there any way to prevent this from happening?

    I'm sure you'll just take issue with my response as well, based on a topic from the other day...
    but I'd have to say it sounds like you're using loggers for the wrong reason, if you think you need to append to an existing XML-based one. Maybe you should be logging to a flat text-based one instead, or shouldn't be integrating with (and dependent on) logger output to the degree that you are. There's probably a different design approach needed, but of course I'm not all that involved in your project. Just trying to point out that you may need to revisit the design instead of sticking with it the way you are.
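As context for why the headers repeat: java.util.logging's XMLFormatter writes the XML prolog via getHead() each time a handler opens the file, so every new FileHandler on an appended file adds another header. A minimal sketch of keeping a single handler open for the life of the run (the file name and class are hypothetical), which at least avoids one header per loop iteration:

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.XMLFormatter;

public final class LogSetup {
    private static FileHandler handler;

    // Open the XML log once per run and share it; each new FileHandler writes
    // another <?xml ...?> prolog via XMLFormatter.getHead() when it opens the file.
    public static synchronized FileHandler sharedHandler() throws IOException {
        if (handler == null) {
            handler = new FileHandler("app.log.xml", true); // append = true
            handler.setFormatter(new XMLFormatter());
        }
        return handler;
    }
}

Attach it once per logger, e.g. Logger.getLogger("myapp").addHandler(LogSetup.sharedHandler()). Appended files will still gain one header (and one log root element) per program run; that part is inherent to XMLFormatter, which is what the reply above means by revisiting the design.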

  • Common issues with BW environment

    Hello Experts,
could you please mention all the common issues faced in a BW environment, with the solutions available?
I have many issues and I wanted to know whether all teams face the same ones.
This would be a good opportunity to have everything in one thread.
    Thanks and regards
    meps

    Hi,
We have so many issues; please check some of them:
    1. DTP Failure
Select the step -> right click and select "Display Message" -> there we will get the message which gives the reason for the abend.
A DTP can fail due to the following reasons; in such cases we can go for restarting the job:
    System Exception Error
    Request Locked
    ABAP Run time error.
    Duplicate records
    Erroneous Records from PSA.
Duplicate records: In case of duplicate records, we can find it in the error message along with the InfoProvider's name. Before restarting the job after deleting the bad DTP request, we have to handle the duplicate records: go to the InfoProvider -> DTP step -> Update tab -> check Handle Duplicate Records -> activate -> execute the DTP. After successful completion of the job, uncheck the Handle Duplicate Records option and activate.
DTP Long Run:
If a DTP is taking longer than the regular run time without having the background job, we have to turn the status of the DTP to Red, delete the bad DTP request (if any), and repeat the step or restart the job.
Before restarting the job / repeating the DTP step, make sure about the reason for the failure.
If the failure is due to a "Space Issue" in the F fact table, engage the DBA team and also the BASIS team and explain the issue to them. The table size needs to be increased before performing any action in BW; this will be done by the DBA team. After increasing the space in the F fact table we can restart the job.
Erroneous Records from PSA: Whenever a DTP fails because of erroneous records while fetching the data from the PSA to the data target, the data needs to be corrected in ECC. If that is not possible, then after getting approval from the business we can edit the erroneous records in the PSA and then run the DTP: go to the PSA -> select the request -> select the error records -> edit the records and save. Then run the DTP.
2. INFO PACKAGE FAILURE: The following are the reasons for Info Package failure:
    Source System Connection failure
    tRFC/IDOC failure
    Communication Issues
    Processing the IDOC Manually in BI
Check the source system connection with the help of SAP BASIS; if it is not fine, ask them to rebuild the connection. After that restart the job (Info Package).
    Go to RSA1 -> select source system -> System -> Connection check.
In case of any failed tRFCs/IDOCs, the error message will be like "Error in writing the partition number DP2" or "Caller 01, 02 errors". In such cases reprocess the tRFC/IDOC with the help of SAP BASIS, and then the job will finish successfully.
    If the data is loading from the source system to DSO directly, then delete the bad request in the PSA table, then restart the job
Info Package Long Run: If an info package is running long, check whether the job has finished in the source system. If it has, check whether any tRFC/IDOC is stuck or failed, with the help of SAP BASIS. If the job is still in yellow status even after reprocessing the tRFC, turn the status to "Red", then restart/repeat the step. After completion of the job, force-complete it.
Before turning the status to Red/Green, make sure whether the load is Full or Delta, and verify the time stamp properly.
Time Stamp Verification:
Select Info Package -> Process Monitor -> Header -> Select Request -> go to the source system (Header -> Source System) -> SM37 -> enter the request and check its status in the source system:
- If it is active, check whether there are any stuck/failed tRFCs/IDOCs.
- If the request is in Cancelled status in the source system, check the Info Package status in the BW system. If the IP status is also failed/cancelled, check the data load type (FULL or DELTA).
- If the load is Full, turn the Info Package status to Red and repeat/restart the Info Package/job.
- If the load is Delta, go to RSA7 in the source system and compare the time stamp with the last updated time of the SM37 background job. If the RSA7 time stamp matches, turn the Info Package status to Red and restart the job; it will fetch the data in the next iteration. If the time stamp is not updated in RSA7, turn the status to Green and restart the job; it will fetch the data in the next iteration.
Source system I/P status | BW system I/P status | RSA7 time stamp                        | SM37 time stamp                 | Action
RED (Cancelled)          | Active               | matches SM37 last updated time         | matches RSA7 time stamp         | Turn the I/P status to Red and restart the job
RED (Cancelled)          | Cancelled            | matches SM37 last updated time         | matches RSA7 time stamp         | Turn the I/P status to Red and restart the job
RED (Cancelled)          | Active               | does not match SM37 last updated time  | does not match RSA7 time stamp  | Turn the I/P status to Green and restart the job
RED (Cancelled)          | Cancelled            | does not match SM37 last updated time  | does not match RSA7 time stamp  | Turn the I/P status to Green and restart the job
Processing the IDOC Manually in BI:
When an IDOC is stuck in BW and the background job completed successfully in the source system, we can process the IDOC manually in BW. Go to Info Package -> Process Monitor -> Details -> select the IDOC in yellow (stuck) status -> right click -> process the IDOC manually -> it will take some time to get processed. (Make sure to process the IDOC in BW only when the background job has completed in the source system and the IDOC is stuck in BW only.)
3. DSO Activation Failure:
When there is a failure in the DSO activation step, check whether the data is loading to the DSO from the PSA or directly from the source system. If the data is loading to the DSO from the PSA, activate the DSO manually as follows:
Right click the DSO Activation step -> Target Administration -> select the latest request in the DSO -> select Activate -> after the request turns green, restart the job.
If the data is loading directly from the source system to the DSO, delete the bad request in the PSA table, then restart the job.
4. Failure in Drop Index/Compression step:
When there is a failure in the Drop Index/Compression step, check the error message. If it failed due to a "Lock Issue", the job failed because of a parallel process or action performed on that particular cube or object; before restarting the job, make sure the object is unlocked. The Index step can also fail in case of TREX server issues; in such cases engage the BASIS team, get the information regarding the TREX server, and repeat/restart the job once the server is fixed. A compression job may fail when there is another job trying to load data into, or access, the cube; in that case the job fails with an error message like "Locked by ...". Before restarting the job, make sure the object is unlocked.
5. Roll Up Failure:
Roll up fails due to contention issues. When a master data load is in progress, there is a chance of roll up failure due to resource contention. In such cases, before restarting the job/step, make sure the master data load has completed; once it finishes, restart the job.
6. Change Run - job finishes with error RSM 756:
When the attribute change run fails due to contention, we have to wait for the other attribute change run job to complete; only one ACR can run in BW at a time. Once the other ACR job has completed, we can restart/repeat the job. We can also run the ACR manually in case of any failures: go to RSA1 -> Tools -> Apply Hierarchy/Change Run -> select the appropriate request in the list -> Execute.
7. Transformation Inactive:
If changes are moved to production without being saved properly, or a transformation is modified without being activated, loads can fail with the error message "Failure due to Transformation Inactive". In such cases we have to activate the inactive transformation: go to RSA1 -> select the transformation -> Activate. If there is no authorization to activate the transformation in the production system, we can do it using the function module RSDG_TRFN_ACTIVATE. Here you will need to enter certain details:
Transformation ID: the transformation's technical name (ID)
Object Status: ACT
Type of Source: the source type
Source name: the source's technical name
Type of Target: the target type
Target name: the target's technical name
Execute. The transformation status will be turned to Active.
Then we can restart the job; it will complete successfully.
8. Process Chain Started from Yesterday's Failed Step:
In a few instances, a process chain starts from the step which failed in the previous iteration instead of starting from the "Start" step.
In such cases we have to delete the previous day's process chain log to start the chain from the beginning (from the Start variant).
Go to ST13 -> select the process chain -> Log -> Delete.
Or we can use the function module for process chain log deletion: RSPROCESS_LOG_DELETE.
Give the log ID of the process chain, which we can get from the ST13 screen.
Then we can restart the chain.
Turning the Process Chain Status using a Function Module:
At times, when a process chain has been running for a long time without any progress, we have to change the status of the entire chain, or of a particular step, by using a function module.
    Function Module: RSPC_PROCESS_FINISH
    The program "RSPC_PROCESS_FINISH" for making the status of a particular process as finished.
    To turn any DTP load which was running long, so please try the following steps to use the program "RSPC_PROCESS_FINISH" here you need to enter the following details:
    LOG ID: this id will be the id of the parent chain.
    CHAIN: here you will need to enter the chain name which has failed process.
    TYPE: Type of failed step can be found out by checking the table "RSPCPROCESSLOG" via "SE16" or "ZSE16" by entering the Variant & Instance of the failed step. The table "RSPCPROCESSLOG" can be used to find out various details regarding a particular process.
    INSTANCE & VARIANT: Instance & Variant name can be found out by right clicking on the failed step and then by checking the "Displaying Messages Options" of the failed step & then checking the chain tab.
    STATE: State is used to identify the overall state of the process. Below given are the various states for a step.
    R Ended with errors
    G Successfully completed
    F Completed
    A Active
    X Canceled
    P Planned
    S Skipped at restart
    Q Released
    Y Ready
    Undefined
    J Framework Error upon Completion (e.g. follow-on job missing)
9. Hierarchy Save Failure:
When there is a failure in Hierarchy Save, follow the process below.
If there is an issue with Hierarchy Save, we have to schedule the info packages associated with the hierarchies manually, then run an attribute change run to update the changes to the associated targets. The step-by-step process:
ST13 -> select the failed process chain -> select the Hierarchy Save step -> right click Display Variant -> select the info package in the hierarchy -> go to RSA1 -> run the info package manually -> Tools -> Run Hierarchy/Attribute Change Run -> select the hierarchy list (here you can find the list of hierarchies) -> Execute.

  • Latency issue with NI-DAQ

I am having a bit of an issue with the overhead that seems to be present in all of the (traditional) NI-DAQ routines that perform analog input. I am using a PCI-6014, which has a 250 kHz maximum conversion rate (4 microseconds), and I am finding a simple AI_Read takes 95 usec. I accept that some overhead is necessary, but taking 24x the sample time seems ridiculous. More complex functions like SCAN_Setup and SCAN_Start (or DAQ_Setup, DAQ_Start) impose a 1.9 millisecond hiccup before they get going.
Maybe I am just spoilt. For about 8 years I have been using register-level programming of a DAQCard-1200, a PCMCIA card that has a maximum conversion rate of 100 kHz (10 usec per sample), and having written to the appropriate register to start the conversion, the data is ready some 19 usec later. It just seems that a 6014 card that is 2.5x faster should not end up 5x slower when the only real difference is whether I go through NI-DAQ or write directly to the card.
    Any help appreciated.

    Hi Michael,
    I haven't done a lot of benchmarking on the various DAQ commands but I might be able to shed a bit of light on the lower level operation of some of these commands.
    When setting up your hardware for a given task (SCAN_Setup, SCAN_Start) these commands call the driver and the driver does have a bit of overhead with these types of calls. As a result, using these commands to "configure" the onboard registers will take the given amount of time.
Since the PCI-6014 has a DMA channel (it also works with interrupts), the card will automatically transfer data to PC memory. All AI_Read is doing is copying data from the PC memory, where the DAQ board is transferring data to, into the LabVIEW or application data memory buffer. This command is quite dependent on the state of the buffer, and it is a blocking call (it holds the driver until it gets what it was called for). As an example, if you told it to read 1000 samples (the Scans to Read parameter) and your PC memory buffer only has 10 samples, this command will wait until it gets all the data points and then transfer them.
What you can do to speed up your acquisition call (AI_Read) is to use the Scan Backlog parameter to monitor how much data is in the buffer, and on the next iteration of the loop only read that amount of data. This means that the AI_Read will not have to wait for data to fill the buffer; it will already be present. This command is essentially a copy command. Copying memory will invariably take longer than writing to a register. Even if you are copying a data buffer in C code from one buffer to another buffer (in PC memory), it will still have a decent amount of overhead versus what we might expect.
Where NI has improved its driver is in the transfer of data from the onboard FIFO of the hardware to PC memory. This is the real rate-determining step in data acquisition. I do, however, see your point if you are trying to reconfigure the board quickly or in a loop, and now it takes longer; these delays can add up.
Bottom line: setting up and configuring your DAQ board will be quicker if you use register-level programming. However, controlling the transfer of data between the DAQ card and PC memory is less efficient (in general, unless you have optimized the transfer algorithms using register-level programming).
    Ron
    Applications Engineer
    National Instruments
