Problem updating cfindex with a large query

Hi all,
I know I am using an old version (Standard MX 6.1), but...
I work on Windows 2000 Server.
I have created two collections.
Then I updated one of them with a query of 3,600 records.
That works fine.
Then I updated the second one, also with a query, but with 15,600
records.
<cfindex> returns no error, but nothing ends up in the collection.
All collections have fewer than 125,000 docs.
Yesterday this update, with maybe 15,500 records, worked
fine.
There is no disk space problem.
Anyone have an idea?
Thanks.
Didier

This has been solved.
In fact, the update works fine.
But I did not know about the 64K limit of cfsearch.
And as it returns no error, just an empty result...
Thanks

Similar Messages

  • Is it possible to update a query with another query?

    I'm trying to update a query with another query (see attached
    code). Here's my setup: I've got a table in an Access database in
    which I enter a string into a form and update. This string
    corresponds to a single record in another table of the same
    datasource. The first table has only one record to provide to the
    second, which has many and will have more. Basically what I'm
    wondering is: is this a valid thing to do in ColdFusion? If not,
    please help with an alternate method. I'm still a novice at
    ColdFusion.
    The overall effect I'm going for is to display the one record
    as a featured truck profile on the web site:
    www.truckerstoystore.net.
    I currently get an error when I try to display the page with the
    current query setup.
    Check this page to see the error:
    www.truckerstoystore.net/currentTOW2.cfm
    Help on this issue is very much appreciated.

    I think this is what you are after.
    <!--- This query will get all the records from the DB --->
    <cfquery name="cTOW" datasource="tow">
        SELECT *
        FROM currentTOW
        <!--- Do you need to find a particular record in the database? --->
        <!--- If so, then you need a WHERE clause in here --->
    </cfquery>
    <!--- Loop over the cTOW query for each record returned --->
    <cfloop query="cTOW">
        <!--- For the record returned from the cTOW query you now need to update the table --->
        <cfquery name="currentTOW" datasource="tow">
            UPDATE YourTableName
            SET DataName = '#cTOW.DataValue#'
        </cfquery>
    </cfloop>
    That's it.
    PS: I think your original query needs modifying so that it returns
    exactly the records that you want to update from the original table,
    i.e. a primary and foreign key relationship.
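    To illustrate that PS in plain SQL (the table, column, and key values here are hypothetical, just a sketch of the idea): the UPDATE inside the loop would normally be keyed to the one record you want to change, for example:
    -- update exactly one row, identified by its primary key
    UPDATE featuredTruck
    SET profileText = 'New featured truck profile'
    WHERE truckID = 42;
    Without such a WHERE clause, the UPDATE touches every row in the table.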

  • SQL Query issue with large varchar column

    I have a SQL query (a PL/SQL function body returning a SQL query) which contains a large varchar column. Even if I substring the column to 30 chars, when it displays on the page it wraps. I have tried putting nowrap="wrap" in the HTML table cell attributes and it still wraps. I have tried setting the width attributes on the column even though it's not an updateable column. Does anyone have any ideas on how to prevent this from wrapping? In some cases one line takes up three because of this wrapping issue, and it's not nice to look at. It seems that the column is somewhere set to a fixed width, which is less than 30 characters, and anything beyond this fixed width wraps.

    Hi Netha,
    Can you please provide the DDL of the three tables you are using?
    Also, how many rows do you get as output for this query?
    select * from dim.store st
    where st.store_code = 'MAUR'
    Also try running an update statement on this table as below, and then execute your query:
    update dim.store
    set store_code = ltrim(rtrim(store_code))
    where store_code = 'MAUR'
    Once you run this update, then run your query. Let us know the result.

  • Portal time out with large jdbc query in JSP bean

    Hi,
    My JSP portlet always times out while waiting to complete two large JDBC queries. The error shown in my jserv.log file is:
    [28/03/2002 12:41:51:221 GMT+08:00] page/Fetching timed out for an Unknown Reason. Killing fetcher name=content-fetcher2 label=174 url=http://mephistopheles.au.oracle.com/servlet/agcharts time=120633
    [28/03/2002 12:43:10:595 GMT+08:00] page/UncaughtException in thread name=content-fetcher2, starting a new fetcher after exception
    java.lang.ThreadDeath
    Are there any particular settings I can modify to prevent the content fetcher from timing out? Cheers

    Thanks. The charts portlet now can be delayed up to the required ~10mins before returning data from the large query. This works in conjunction with changes in the directives/parameters for Jserv and httpd(Note:180548.1)
    Shankar,
    Normally, you can increase the time out in the portlet tag of the provider.xml file. The timeout is in seconds, you should also increase the provider timeout. This is done in the provider registration screen.
    <timeout>60</timeout>
    <timeoutMessage>My Portlet Timed Out</timeoutMessage>
    Please note that if you set the timeout too high and your portlet is really not coming up, your page will wait the 60, 90, 120, etc. seconds for the portlet to time out, so be careful with portlet and provider timeouts.
    Sue

  • Updating a table with a query that returns multiple values

    Hi,
    I'm trying to update a table which contains these fields: ItemID, InventoryID, total amounts,
    with a query that returns these values (itemId, inventoryid and total amounts) for each item.
    Mind you, not all the rows in the table need to be updated, only a few.
    This is what I wrote, but it doesn't work since the query returns multiple values, so I can't assign it to JournalAmounts.
    UPDATE [bmssa].[etshortagetemp]
    SET JournalAmounts = (SELECT sum(b.BomQty) FROM [bmssa].[Bom] b
    JOIN [bmssa].[SalesLine] sl ON sl.ItemBomId = b.BomId
    JOIN [bmssa].[SalesTable] st ON st.SalesId = sl.SalesId
    WHERE st.SalesType = 0 AND (st.SalesStatus IN (0,1,8,12,13)) AND st.DataAreaId = 'sdi'
    GROUP BY b.itemid, b.inventdimid)
    Any advice on how to do this task?

    Remember the link to the documentation posted above that explains exactly how to do this. When you read it, which part exactly were you having trouble with?
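    One way to write this in T-SQL is to join the target table to the grouped subquery and update only the matching rows. In this sketch, the join assumes the aggregate's ItemId/InventDimId pair corresponds to the target table's ItemID/InventoryID columns; that mapping is an assumption, so adjust it to your schema:
    -- join the target table to the aggregated result and update matching rows only
    UPDATE t
    SET t.JournalAmounts = agg.TotalQty
    FROM [bmssa].[etshortagetemp] t
    JOIN (
        SELECT b.ItemId, b.InventDimId, SUM(b.BomQty) AS TotalQty
        FROM [bmssa].[Bom] b
        JOIN [bmssa].[SalesLine] sl ON sl.ItemBomId = b.BomId
        JOIN [bmssa].[SalesTable] st ON st.SalesId = sl.SalesId
        WHERE st.SalesType = 0
          AND st.SalesStatus IN (0, 1, 8, 12, 13)
          AND st.DataAreaId = 'sdi'
        GROUP BY b.ItemId, b.InventDimId
    ) agg ON agg.ItemId = t.ItemID            -- assumed column mapping
         AND agg.InventDimId = t.InventoryID  -- assumed column mapping
    Rows without a matching aggregate are left untouched, which fits the requirement that only a few rows need updating.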

  • Update several rows with a single query (easy question, I guess)

    Hi all!
    I have a table with two columns - name and value.
    I populate it with several SQL queries:
    insert into settings (name, value) values ('company_name', 'My Company');
    insert into settings (name, value) values ('company_address', 'Company Address 12');
    insert into settings (name, value) values ('company_city', 'South Park');
    How can I update the rows for company_name and company_city in a single query?
    Thank you in advance!

    "How can I update the rows for company_name and company_city in a single query?"
    I guess something like this was meant:
    update settings set value = ??? where name in ('company_name', 'company_city');
    But it's still unclear to me what should be used instead of the question marks... :)
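    If the two rows need different values, one option is a CASE expression keyed on name (the new values below are made up for illustration):
    -- one statement, two rows, each getting its own (hypothetical) new value
    update settings
    set value = case name
                  when 'company_name' then 'My New Company'
                  when 'company_city' then 'North Park'
                end
    where name in ('company_name', 'company_city');
    The WHERE clause matters: without it, every other settings row would be set to NULL by the CASE.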
    Regards.

  • Performance issue with insert query!

    Hi,
    I am using dbxml-2.4.16; my node-storage container is loaded with a large document (a 54 MB XML).
    My document basically contains around 65k records in the same table (65k child nodes for one parent node). I need to insert more records into my DB, but my insert XQuery is consuming a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
    My container is indexed with "node-attribute-equality-string". The insert query I used:
    insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    If I modify my query with
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
    instead of
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
    the time taken is reduced by only 8 secs.
    I have also tried to use insert "after", "before", "as first", "as last", but there is no difference in performance.
    Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
    Has anybody got any idea regarding this performance issue?
    Kindly help me out.
    Thanks,
    Kapil.

    Hi George,
    Thanks for your reply.
    Here is the info you requested,
    dbxml> listIndexes
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-attribute-equality-string for node {}:mySSIAddress
    2 indexes found.
    dbxml> info
    Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
    Berkeley DB 4.6.21: (September 27, 2007)
    Default container name: n_b_i_f_c_a_z.dbxml
    Type of default container: NodeContainer
    Index Nodes: on
    Shell and XmlManager state:
    Not transactional
    Verbose: on
    Query context state: LiveValues,Eager
    The insert query with the update takes ~32 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 32.5002
    and the query without the update part takes ~14 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 13.7289
    The query :
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
    Time in seconds for command 'query': 0.005375
    is very fast.
    The update of the document seems to consume most of the time.
    Regards,
    Kapil.

  • Updating table with more than 50 million records

    Hi Team,
    Updating three columns in a table which has 60 million records is taking a lot of time. How can I improve the performance of the query? The table has to be updated based on values from different database tables.
    Thanks,
    Eshwar.

    Split the update into small batches.
    When you are using multiple tables, use JOINs between them.
    Have the corresponding indexes on the joining columns.
    Disable triggers, if any (if possible and it doesn't affect your logic - this would be the least favourable step).
    These should also help:
    http://blogs.msdn.com/b/repltalk/archive/2011/10/10/lessons-learned-updating-100-millions-rows.aspx
    http://www.sqlservergeeks.com/blogs/AhmadOsama/personal/450/sql-server-optimizing-update-queries-for-large-data-volumes
    http://stackoverflow.com/questions/7344984/update-large-number-of-rows-sql-server-2005
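    As a rough illustration of the batching idea (every table, column, and key name below is made up), update a fixed number of rows per pass and loop until nothing is left to change:
    -- hypothetical batched update: change at most 50,000 rows per pass
    DECLARE @batch INT = 50000;
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (@batch) t
        SET t.Col1 = s.Col1, t.Col2 = s.Col2, t.Col3 = s.Col3
        FROM dbo.BigTable t
        JOIN dbo.SourceTable s ON s.Id = t.Id
        WHERE t.Col1 <> s.Col1 OR t.Col2 <> s.Col2 OR t.Col3 <> s.Col3;
        IF @@ROWCOUNT = 0 BREAK;  -- stop when no rows were changed
    END
    Smaller transactions keep the log manageable and let other sessions get in between batches.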
    Thanks,
    Jay

  • Business Partner records with large numbers of addresses -- Move-in issue

    Friends,
    Our recent CCS implementation (ECC6.0ehp3 & CRM2007) included the creation of some Business Partner records with large numbers of addresses. Most of these are associated with housing authorities, large developers and large apartment complex owners. Some of the Business Partners have over 1,000 address records, and one particular BP has over 6,000 addresses that were migrated from our legacy system. We are experiencing very long run times when trying to execute move-ins and move-outs because the system reads the volume of addresses attached to the Business Partner. In many cases, the system simply times out before it can execute the transaction. SAP's suggestion is that we run a BAPI to cleanse the addresses and also implement a BADI to prevent the creation of excess addresses.
    Two questions surround the implementation of this code. Will the BAPI to cleanse the addresses wipe out all address records except for the standard address? That presents an issue: we need to ensure that the standard address on the BP record is the one we have identified as the proper mailing address. The second question is around the BADI to prevent the creation of excess addresses. It looks like this BADI is going to prevent the move-in address from updating the standard address on the BP record, which in the vast majority of cases is exactly what we would want.
    Does anyone have any experience with this situation of excess BP addresses and how did you handle the manipulation and cleansing of the data and how do you maintain it going forward?
    Our solution is ECC6.0Ehp3 with CRM2007...latest patch level
    Specifically, SAP suggested we apply/review these notes:
    Note 1249787 - Performance problem during move-in with huge addresses
    **applied this ....did not help
    Note 861528 - Performance in move-in for partner w/ large no of addresses
    **older ISU4.7 note
    Directly from our SAP message:
    use the function module
    BAPI_BUPA_ADDRESS_REMOVE or run BAPI_ISUPARTNER_CHANGE to delete
    unnecessary business partner addresses.
    Use BAdI ISU_MOVEIN_CUSTOMIZE to avoid the creation of unnecessary
    business partner addresses (cf. note 706686) in the future for that
    business partner.
    Note 706686 - Move-in: Avoid unnecessary business partner addresses
    Does anyone have any suggestions and have you used above notes/FMs to resolve something like this?
    Thanks,
    Nick

    Nick:
    One thing to understand is that the BADI and BAPI are just the tools or mechanisms that will enable you to fix this situation. You or your development team will need to define the rules under which these tools are used. Let's take them one at a time.
    BAPI - the BAPI for business partner address maintenance. It would seem that you need to create a program which first reads the partners and the addresses assigned to them and then compares these addresses to each other to find duplicates. These duplicates can then be removed provided they are not used elsewhere in the system (i.e. on a contract account).
    BADI - the badi for business partner address maintenance.  Here you would need to identify the particular scenarios where addresses should not be copied.  I would expect that most move-ins would meet the criteria of adding the address and changing the standard address.  But for some, i.e. landlords or housing complexes, you might not add an address because it already exists for the business partner, and you might not change the standard address because those accounts do not fall under that scenario.  This will take some thinking and design to ensure that the address add/change functions are executed under the right circumstances.
    regards,
    bill.

  • In OSB, XQuery issue with large volume data

    Hi,
    I am facing a problem with an XQuery transformation in OSB.
    There is one XQuery transformation where I compare all the records, and if there are similar records I club them under the same first node.
    I am reading the input file from the FTP process. This works perfectly for small input data. When the input data is large it still works, but it takes a huge amount of time, the file moves to the error directory, and I see duplicate records created for the same input data. I am not seeing anything related to this file in the error log or the normal log.
    How do I check what exactly is causing the issue here - why it is moving to the error directory and why I am getting duplicate data for large input (approx 1 GB)?
    My XQuery is something like below.
    <InputParameters>
                    for $choice in $inputParameters1/choice              
                     let $withSamePrimaryID := ($inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID])                
                     let $withSamePrimaryID8 := ($inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME])
                     return
                      <choice>
                     if(data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID)) then
                     let $claimID:= $withSamePrimaryID[1]/ClaimID
                     return
                     <ClaimID>{$claimID}</ClaimID>                
                     else
                     <ClaimID>{ data($choice/ClaimID) }</ClaimID>

    Hi,
    I understand your use case is:
    a) read the file (from an FTP location, hopefully a txt file)
    b) process the file (your XQuery - I will not get into the details)
    c) do something with the file (send it to a backend system via a Business Service?)
    Also noted: files of large size take a long time to be processed. This depends on the memory/heap assigned to your JVM.
    You can say that is expected behaviour.
    On the other point, the file being moved to the error dir etc. - this could be the error handler doing its job (if you have one).
    If there are no error handlers, look at the timeout and error condition scenarios on your service.
    HTH

  • Reader Updates conflicting with acrobat

    I have to deploy these updates on a large scale, but when they are deployed, clients that have Acrobat installed have problems with their .fdf forms: they are blanked out and will not populate the required data. This can be fixed manually by selecting the default .pdf handler in Reader preferences, but that basically forces a rebuild of Reader. I cannot apply a manual fix to a large number of machines. Anyone have any suggestions?

    Ok, I have not been clear enough, thanks for your patience.
    I can open AA6 but when I try to open a document I get a message stating, “ Adobe Acrobat has encountered a problem and needs to close. We are sorry for the inconvenience.”  There is a link I can select to view more information. The error signature reads
    Error signature
    AppName: acrobat.exe                 AppVer: 6.0.5.399   ModName: unknown
    ModVer: 0.0.0.0               Offset: 00060000
    I then can select to view the error report. I cannot seem to highlight to copy and paste the error report which is a pretty long document.
    Hope this helps
    Thank you very much!
    Brad Ashworth AIA, LEED AP

  • Oracle RDF / Joseki : issue with large literals

    Hi,
    I have been using Joseki to query an Oracle RDF model. There seems to be an issue with large literals (according to a few unreliable tests, I would say this concerns literals around and over 4000 chars).
    Here are the two potential behaviours :
    First case:
    If the result contains several lines, one of which contains an overly large literal, there is NO exception on the server side, but the resulting XML is incomplete.
    It is missing the "line" containing the large literal, and the XML stops there (which means that it is also missing the closing </results> and </sparql>). In my case, I am using the results through Jena's sparqlService, which means I get this message:
    XMLStreamException: Unexpected EOF; was expecting a close tag for element <results>
    at [row,col {unknown-source}]: [31,0]
    Second case:
    If the query only returns one line which contains an overly large literal, the client receives a simple "HttpException: 500 Internal Server Error".
    Here is the error message from my server :
    INFO [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] (SPARQL.java:165) - Throwable: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
    java.lang.ClassCastException: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
    at oracle.spatial.rdf.client.jena.OracleSemIterator.getNodesFromResultSet(OracleSemIterator.java:579)
    at oracle.spatial.rdf.client.jena.OracleSemIterator.next(OracleSemIterator.java:445)
    at oracle.spatial.rdf.client.jena.OracleLeanQueryIter.moveToNextBinding(OracleLeanQueryIter.java:135)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.moveToNextBinding(QueryIterConvert.java:56)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.moveToNextBinding(QueryIterRepeatApply.java:76)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:54)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:50)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.ResultSetStream.hasNext(ResultSetStream.java:62)
    at org.joseki.processors.SPARQL.executeQuery(SPARQL.java:309)
    at org.joseki.processors.SPARQL.execQueryWorker(SPARQL.java:288)
    at org.joseki.processors.SPARQL.execQueryProtected(SPARQL.java:126)
    at org.joseki.processors.SPARQL.execOperation(SPARQL.java:120)
    at org.joseki.processors.ProcessorBase.exec(ProcessorBase.java:112)
    at org.joseki.ServiceRequest.exec(ServiceRequest.java:36)
    at org.joseki.Dispatcher.dispatch(Dispatcher.java:59)
    at org.joseki.http.Servlet.doCommon(Servlet.java:177)
    at org.joseki.http.Servlet.doGet(Servlet.java:138)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Would there be any fix / workaround ?
    Please let me know if you need further information / tests.
    Thanks,
    Regards,
    Julien

    Thanks for your reply.
    While trying to build up a small test case, I found out why there were discrepancies between the two cases I described.
    Indeed, usually, the two cases return the same thing (no exception on the server side, but incomplete resulting XML).
    The exception I described happened when I tried something else. Since I saw that issues were coming from long literals, I used fn:string-length (ARQ) to figure out how long they were.
    The test case resulting in the CLOB-cast-exception is:
    - too large literal
    - only one result "line" containing this literal
    - usage of fn:string-length (which does not change the behaviour in the other cases, i.e. no long literals and/or several lines).
    Anyway, you will receive the other test cases shortly.
    Thanks,
    Regards,
    Julien

  • How to use "with clause query" in DBadapter

    Hi all,
    I need to implement a "with clause" query in Oracle SOA 11g BPEL. When I put the query in the DB adapter as pure SQL, the schema is not generated properly. Can anyone suggest a solution to my problem?
    Regards,
    Kaushik

    Pure SQL won't work because it is expecting the first word in the SQL to be SELECT (or INSERT, UPDATE, DELETE).
    If your query is WITH ... SELECT ...,
    try this:
    Delete everything before the SELECT. Copy and paste the generated XSD into another window. The SQL test may fail, but that just means it couldn't fill in the types of the columns in the SELECT ... FROM list. You can always do that yourself by hand-editing the XSD (including in the wizard before you hit Next). Then put back the WITH ... clause before the remaining SELECT .... If the XSD gets overwritten, copy the version you saved in the other window and paste it over the top. Then hit Next, and the runtime should still work.
    Keep in mind that SQL is very complex and hard to fully parse in the UI. However, the minimum information the DbAdapter needs is quite limited: basically just the names and number of columns that are coming back. The XSD is meant to be editable in the wizard if the SQL is too complex.
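    A hypothetical example of the split described above (the query and all names are made up): the full statement is what runs at runtime, while the trimmed SELECT is pasted into the wizard only to generate the XSD, since the element names come from the SELECT list.
    -- full query used at runtime (WITH clause included)
    WITH recent_orders AS (
        SELECT order_id, customer_id, total_amount
        FROM orders
        WHERE order_date > SYSDATE - 30
    )
    SELECT customer_id, SUM(total_amount) AS monthly_total
    FROM recent_orders
    GROUP BY customer_id;
    -- trimmed query pasted into the wizard just to generate the XSD
    -- (its test may fail to resolve recent_orders; the column names still come through)
    SELECT customer_id, SUM(total_amount) AS monthly_total
    FROM recent_orders
    GROUP BY customer_id;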
    Thanks
    Steve

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious();
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
    } finally {
        if (cursor != null) {
            cursor.close();
        }
    }
    We are committing every 50 records in the unit of work. If we set DontMaintainCache on the ReadAllQuery, we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM

  • Help me to update a table with conditions

    This table is an alert table which is updated when a SQL Server instance is down, is not pinging, or drive space is low.
    The monitoring system runs every 15 minutes. If any issue comes up, it updates the information in this table. If the issue is not solved within 15 minutes, the table is updated again with the same details.
    I would like ticketraised = 'Y' to be set only the first time; if I get the same issue again in less than 30 minutes, it should not be changed to 'Y', based on server name, type and message.
    If the same value is already there within the 10-20 minute window, the table should not be updated with that value again. Can anyone help me with a T-SQL query?

    In future please post DDL and DML. For now I have created a scenario which will help you understand the solution to your own requirement.
    CREATE TABLE Tickets_Log(
    Ticked_ID SMALLINT IDENTITY(1,1) PRIMARY KEY,
    Ticket_Type VARCHAR(20) NOT NULL,
    Log_Date DATETIME2 NOT NULL DEFAULT DATEADD(MINUTE,-15,GETDATE()),
    Machine_Name VARCHAR(50) NOT NULL,
    Message VARCHAR(100) NOT NULL,
    Ticket_Status CHAR(2) NOT NULL DEFAULT 'N',
    Update_Status SMALLINT DEFAULT 0)
    INSERT Tickets_Log(Ticket_Type,Machine_Name,Message)
    SELECT 'Pinging','HOD-400-651','Server Not Pinging' UNION
    SELECT 'Low Drive Space','HOD-400-652','Drive Space Low' UNION
    SELECT 'Connection','HOD-400-653','Unable to Connect to Server'
    UPDATE TL
    SET Log_Date=NewTickets.Log_Date,
    Update_Status=1
    FROM( SELECT 'Pinging' Ticket_Type,'HOD-400-651' Machine_Name,'Server Not Pinging' Message,GETDATE() Log_Date UNION
    SELECT 'Pinging','HOD-400-653','Server Not Pinging',GETDATE() Log_Date) NewTickets
    LEFT JOIN Tickets_Log TL ON NewTickets.Machine_Name=TL.Machine_Name AND NewTickets.Ticket_Type=TL.Ticket_Type
    WHERE TL.Ticket_Type IS NOT NULL AND TL.Machine_Name IS NOT NULL AND DATEDIFF(MINUTE,TL.Log_Date,NewTickets.Log_Date)>=15 AND Update_Status=0
    INSERT Tickets_Log(Ticket_Type,Machine_Name,Message,Log_Date)
    SELECT NewTickets.*
    FROM( SELECT 'Pinging' Ticket_Type,'HOD-400-651' Machine_Name,'Server Not Pinging' Message,GETDATE() Log_Date UNION
    SELECT 'Pinging','HOD-400-653','Server Not Pinging',GETDATE() Log_Date) NewTickets
    LEFT JOIN Tickets_Log TL ON NewTickets.Machine_Name=TL.Machine_Name AND NewTickets.Ticket_Type=TL.Ticket_Type
    WHERE TL.Ticket_Type IS NULL AND TL.Machine_Name IS NULL
