Problem with matrix and Date

Hi All,
I have designed a form with a textbox and a matrix.
I have set a CFL (ChooseFromList) on the textbox.
When I select records from the CFL, I make them display in the matrix; at that point an error appears: Invalid Date Value.
-Anto

Hi Sridhar,
Here is my code (multi-selection in the CFL):
Try
    ' Copy the selected CFL rows into a temporary array
    Dim CFLArr(50, 8) As String
    For i As Integer = 1 To oDataTable.Rows.Count
        CFLArr(i - 1, 0) = oCFLEvento.SelectedObjects.GetValue("U_Empid", i - 1)
        CFLArr(i - 1, 1) = oCFLEvento.SelectedObjects.GetValue("U_Empname", i - 1)
        CFLArr(i - 1, 2) = oCFLEvento.SelectedObjects.GetValue("U_dept", i - 1)
        CFLArr(i - 1, 3) = oCFLEvento.SelectedObjects.GetValue("U_DOJ", i - 1)   ' date field
        CFLArr(i - 1, 4) = oCFLEvento.SelectedObjects.GetValue("U_exp", i - 1)
        CFLArr(i - 1, 5) = oCFLEvento.SelectedObjects.GetValue("U_CTC", i - 1)
    Next
    ' Write the array into the matrix, one row per selected record
    oMatrix = oForm.Items.Item("matrix").Specific
    For i As Integer = 1 To oDataTable.Rows.Count
        oMatrix.AddRow()
        oMatrix.Columns.Item("V_5").Cells.Item(oMatrix.RowCount).Specific.Value = CFLArr(i - 1, 0)
        oMatrix.Columns.Item("V_4").Cells.Item(oMatrix.RowCount).Specific.Value = CFLArr(i - 1, 1)
        oMatrix.Columns.Item("V_3").Cells.Item(oMatrix.RowCount).Specific.Value = CFLArr(i - 1, 2)
        oMatrix.Columns.Item("V_2").Cells.Item(oMatrix.RowCount).Specific.Value = CFLArr(i - 1, 3)
        oMatrix.Columns.Item("V_1").Cells.Item(oMatrix.RowCount).Specific.Value = CFLArr(i - 1, 4)
        oMatrix.Columns.Item("V_0").Cells.Item(oMatrix.RowCount).Specific.Value = CFLArr(i - 1, 5)
    Next
Catch ex As Exception
    SBO_Application.MessageBox("ErrCfl:" & ex.Message)
End Try
-Anto
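The "Invalid Date Value" most likely comes from the U_DOJ value: for a date field, GetValue returns a DateTime, and its default string form (e.g. "01/06/2010 00:00:00") is rejected by a date-typed matrix cell. A minimal sketch of a workaround inside the first loop, assuming U_DOJ is the date field and "V_2" its matrix column as in the code above ("yyyyMMdd" is the unambiguous format the UI API generally accepts when setting date values):

Dim rawDoj As Object = oCFLEvento.SelectedObjects.GetValue("U_DOJ", i - 1)
Dim dojString As String = ""
If Not rawDoj Is Nothing AndAlso Not IsDBNull(rawDoj) Then
    ' Format the date as yyyyMMdd instead of relying on the default ToString()
    dojString = CDate(rawDoj).ToString("yyyyMMdd")
End If
CFLArr(i - 1, 3) = dojString   ' then assigned to column "V_2" as before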

Similar Messages

  • Basic Problems with Attributes and Data Loading

    Hi Specialists,
    I have a basic understanding problem, which I will explain with a simple example. I have to load employee data from a file. For each employee we have:
    EmployeeNo
    EmployeeName
    EmployeeLanguage
    My modelling:
    One InfoObject Employee, with EmployeeNo as key and EmployeeName as text. For the EmployeeLanguage I create an additional InfoObject, in order to have an attribute EmployeeLanguage on the InfoObject Employee. So far so good (I think).
    What kind of DataSources do I have to create? Is it necessary to load the possible languages first? If yes, what kind of DataSource do I have to choose for loading only "English", "German", "Dutch": Attribute or Text?
    If not, the loading process is not possible, because one language exists more than once in my flat file.
    Hope you can help me see it clearly.
    Thanks.
    Denise

    Hi Denise,
    I don't think you have to load all possible languages first. This is only necessary if you also want to load texts for EmployeeLanguage (for example, language German with the texts "german", "deutsch", ...).
    Specify Employee as a data target (on the tab Master Data / Texts) and assign your InfoArea. There you will find your targets for texts and attributes; now create an InfoSource for your file and assign update rules for both texts and attributes.
    Best regards,
    Björn

  • Big problems with Matrix and Screen Painter

    Hi all!
    I'm having a terrible time with my matrix in Screen Painter. I looked over all the samples in the SDK and tried different things. The closest I got to making it work: when my form loads, I can click each cell in the first row of the matrix, but after I enter data and click another cell, I lose the data. My form is fairly simple: four regular data fields and a matrix with 8 columns. Can someone help me with this? Maybe one of you has a complete example of entering data manually into cells and then saving it. I'm using a UDT (not a UDO) and .NET ...
    Thanks in advance

    Hi Alain,
    It's not necessary to delete the matrix; leave it there. Then access your matrix with the following code:
    Dim oMatrix As SAPbouiCOM.Matrix
    Dim oColumn As SAPbouiCOM.Column
    Dim oColumns As SAPbouiCOM.Columns
    oMatrix = SBO_Application.Forms.Item("FormID").Items.Item("MatrixID").Specific
    oColumns = oMatrix.Columns
    oMatrix.Clear()
    Then create a user data source and bind each column to it:
    oForm.DataSources.UserDataSources.Add("DataID", SAPbouiCOM.BoDataType.dt_SHORT_TEXT, 20)
    oColumn = oColumns.Item("ColID")
    oColumn.DataBind.SetBound(True, "", "DataID")
    Hope this helps
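    The symptom Alain describes (typed data disappearing when another cell is clicked) is exactly what unbound columns do: every editable column needs its own data source. A short sketch expanding the reply, assuming eight columns with hypothetical UIDs "Col0".."Col7":

    ' One user data source per column, then one empty row to type into
    For c As Integer = 0 To 7
        oForm.DataSources.UserDataSources.Add("UD" & c, SAPbouiCOM.BoDataType.dt_SHORT_TEXT, 50)
        oMatrix.Columns.Item("Col" & c).DataBind.SetBound(True, "", "UD" & c)
    Next
    oMatrix.AddRow()
    ' When saving, read the cells back (row indexes are 1-based):
    Dim firstCell As String = oMatrix.Columns.Item("Col0").Cells.Item(1).Specific.Value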

  • Problem with CASE  and DATEs in sql

    Hi everybody,
    I have this table
    SQL> desc borrar
    Name Null? Type
    NUMERO FLOAT(126)
    NUMERO2 NUMBER(2)
    FECHA1 DATE
    NUMEROREAL NUMBER(10,3)
    CADENA VARCHAR2(100)
    CADENAN NVARCHAR2(10)
    and when I execute the following statement, this error appears:
    SQL> select 1,
    case when sysdate <= FECHA1 then
    fecha1
    else
    to_date('02012000','ddmmyyyy')
    end reconDia
    from borrar;
    to_date('02012000','ddmmyyyy')
    ERROR at line 4:
    ORA-00932: inconsistent datatypes
    But if I write ...
    select 1,
    case when sysdate <= FECHA1 then
    to_date(to_char(fecha1,'ddmmyyyy'), 'ddmmyyyy')
    else
    to_date('02012000','ddmmyyyy')
    end reconDia
    from borrar
    it is OK ...
    Does somebody know if this is a bug, or am I doing something wrong?
    Thank you in advance
    Hernan

    The query works correctly in 9i, but apparently you are running on 8i. Adding a CAST statement in 8i is another way to get the query to work:
    sql>select 1,
      2         case when sysdate <= fecha1 then fecha1
      3         else to_date('02012000','ddmmyyyy')
      4         end recondia
      5    from borrar;
           else to_date('02012000','ddmmyyyy')
    ERROR at line 3:
    ORA-00932: inconsistent datatypes
    sql>select 1,
      2         case when sysdate <= fecha1 then fecha1
      3         else cast(to_date('02012000','ddmmyyyy') as date)
      4         end recondia
      5    from borrar;
            1 RECONDIA
            1 07/30/2003 09:44:14am
            1 01/02/2000 12:00:00am
            1 01/02/2000 12:00:00am
    3 rows selected.

  • Problem during export and data

    I have 60 GB of mount point at OS level, but my estimated dump file size is 150 GB.
    How will I take the backup of the database using simple export and the Data Pump utility?
    We are using Red Hat Linux, and the database version is 10.2.0.4.
    I am new to Oracle.
    Thanks
    sanjaya

    >
    I have 60 GB of mount point at OS level, but my estimated dump file size is 150 GB.
    How will I take the backup of the database using simple export and the Data Pump utility?
    >
    Yes, use RMAN for backups. An export won't let you restore everything if your database crashes an hour after your export finished: you would have lost that hour of data. With RMAN, a backup takes hardly 1/5th of the export time, and it is reliable when a recovery situation occurs.
    By the way, if you use traditional export (not Data Pump), you can use a pipe to compress the data as it gets exported.
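    A minimal sketch of that named-pipe trick on Linux (assuming the classic exp client; the connect string, paths, and file names here are placeholders):

    mknod /tmp/exp_pipe p                                   # create a named pipe
    gzip -c < /tmp/exp_pipe > /backup/full_export.dmp.gz &  # compress in the background
    exp system/password full=y file=/tmp/exp_pipe log=/backup/full_export.log
    rm /tmp/exp_pipe

    This way the uncompressed 150 GB dump never lands on the 60 GB mount point; only the compressed file does.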

  • IPhoto 09 Problem with Events and Dates

    In iPhoto 08, when adding pics and videos to an event (irrespective of date) they were all grouped together in a folder named by the Event.
    Now in iPhoto 09, when I add pics and videos, they do not go into the event folder but into a new folder named with the date. In iPhoto everything is grouped together correctly, but the underlying file structure is different.
    This is especially troublesome because when iMovie reads videos from the iPhoto library, it labels the videos with the date instead of the event. I recently converted a bunch of my old videos (which now all have the same creation date) and they are all mixed up now, even though they are appropriately grouped in events in iPhoto.
    Does anybody know a workaround for this?

    Yes, I can access the movies in iMovie, but they are all grouped by the same date. My old videos are grouped by the event name from iPhoto. It seems like iMovie 09 is reading from the iPhoto 09 library file structure, which in the past had folders for each event but now only adds folders with the date as the name.

  • Problem with DateDiff and Date/Time value.

    The form below displayed perfectly until I added the line for DateDiff between VacStart and VacEnd. I receive this error message:
    An error occurred while evaluating the expression:
    #DateDiff("w", "VacStart", "VacEnd")#
    Error near line 48, column 51.
    Parameter 2 of function DateDiff which is now "VacStart" must be a date/time value
    The error occurred while processing an element with a general identifier of (#DateDiff("w", "VacStart", "VacEnd")#), occupying document position (48:50) to (48:86).
    I have tried changing the DB value to Date/Time as well as Text and get the same result each time. Any help would be greatly appreciated. Here is my code:
    <cfquery datasource="manna_premier" name="TM_log">
    SELECT *
    FROM TMStatusLog
    </cfquery>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
    <title>TM Status Log</title>
    </head>
    <body>
    <p align="center" class="style4">Territory Manager Status Log </p>
    <table width="1448" border="0">
      <tr>
        <th width="71"><span class="style3">ID No.#</span></th>
    <th width="88"><span class="style3"> Date</span></th>
    <th width="166"><span class="style3">TM Name</span></th>
    <th width="129"><span class="style3">Status</span></th>
    <th width="139"><span class="style3">Vac Start </span></th>
    <th width="137"><span class="style3">Vac End</span></th>
    <th width="203">Ttl Days </th>
    <th width="203"><span class="style3">DSR Ride Along Name</span></th>
    <th width="274"><span class="style3">Service Call Name</span></th>
      </tr><cfoutput query="TM_log">
      <tr>
        <td><span class="style3">#ID#</span></td>
        <td><div align="center"><span class="style3">#DateFormat(LogDate, "mm/dd/yyyy")#</span></div></td>
        <td><div align="center"><span class="style3">#TerritoryManager#</span></div></td>
        <td><div align="center"><span class="style3">#Status#</span></div></td>
        <td><div align="center"><span class="style3">#DateFormat(VacStart, "mm/dd/yyyy")#</span></div></td>
        <td><div align="center"><span class="style3">#DateFormat(VacEnd, "mm/dd/yyyy")#</span></div></td>
    <cfif VacStart IS NOT "Null">
        <td><div align="center"><span class="style3">#DateDiff("w", "VacStart", "VacEnd")#</span></div></td>
    </cfif>
        <td><div align="center"><span class="style3">#DSRName#</span></div></td>
        <td><div align="center"><span class="style3">#ServiceName#</span></div></td>
      </tr></cfoutput>
    </table>
    </body>
    </html>

    Error Diagnostic Information
    An error occurred while evaluating the expression:
    #DateDiff("w", VacStart, VacEnd)#
    Error near line 47, column 48.
    Parameter 2 of function DateDiff which is now "" must be a date/time value
    The error occurred while processing an element with a general identifier of (#DateDiff("w", VacStart, VacEnd)#), occupying document position (47:47) to (47:79).
    Any other suggestions? Thanks!
    Well... what does the error message say?
    Parameter 2 of function DateDiff which is now "" must be a date/time value
    What does this tell you? What is parameter 2 of that function call? It's VacStart. So it's saying VacStart is an empty string, and that an empty string is not a valid value for that parameter.
    So the question you need to ask yourself is whether you expect VacStart to be an empty string, or do you perhaps expect it to be a date? I would guess you're expecting it to be a date. So the question becomes... why is VacStart an empty string?
    Adam
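    A sketch of the guard Adam is pointing at, assuming the column names from the post: test the values with IsDate() before calling DateDiff(), and emit an empty cell otherwise so the table columns stay aligned. (Note that the original <cfif VacStart IS NOT "Null"> compares against the literal string "Null", so an empty string still slips through to DateDiff.)

    <cfif IsDate(VacStart) AND IsDate(VacEnd)>
        <td><div align="center"><span class="style3">#DateDiff("w", VacStart, VacEnd)#</span></div></td>
    <cfelse>
        <td>&nbsp;</td>
    </cfif>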

  • Problem in Matrix while Populating Data

    Hello All,
    I am facing a problem with a matrix column in SAP 8.8.
    I am filling a matrix with data from the table ITT1:
    Matrix1 = oChlSelc.Items.Item("3").Specific
    oChlSelc.DataSources.DataTables.Add("Approval")
    oChlSelc.DataSources.DataTables.Item("Approval").Clear()
    Dim Str_Query As String = "Select Father,Code,Quantity From ITT1"
    oChlSelc.DataSources.DataTables.Item("Approval").ExecuteQuery(Str_Query)
    Matrix1.Columns.Item("col_1").DataBind.Bind("Approval", "Code")
    Matrix1.Columns.Item("col_2").DataBind.Bind("Approval", "Quantity")
    Matrix1.LoadFromDataSource()
    The problem: the data is populated in the matrix without any problem, but the Quantity column is not showing the exact value; it shows the rounded-off value, i.e. a Quantity of 1.45 appears as 1 in the matrix column.
    That is what I am facing.
    Thanks
    Amit

    Hi Amit,
    It's working fine in my system,
    and I'm also getting the quantity value as 1.00 by using this code:
    Dim matrix1 As SAPbouiCOM.Matrix
    Dim oForm As SAPbouiCOM.Form
    oForm = SBO_Application.Forms.Item("frm_test")
    matrix1 = oForm.Items.Item("1").Specific
    oForm.DataSources.DataTables.Add("Approval")
    oForm.DataSources.DataTables.Item("Approval").Clear()
    Dim Str_Query As String = "Select Father,Code,Quantity From ITT1"
    oForm.DataSources.DataTables.Item("Approval").ExecuteQuery(Str_Query)
    matrix1.Columns.Item("V_0").DataBind.Bind("Approval", "Code")
    matrix1.Columns.Item("V_1").DataBind.Bind("Approval", "Quantity")
    matrix1.LoadFromDataSource()
    Thanks
    Shafi
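    Before blaming the matrix, it is worth confirming whether the truncation happens in the DataTable itself or only in the display. A small diagnostic sketch ("Approval" and oForm are from the code above; DataTable.GetValue is part of the UI API):

    ' Inspect the raw value the query returned for the first row
    Dim dt As SAPbouiCOM.DataTable = oForm.DataSources.DataTables.Item("Approval")
    SBO_Application.MessageBox("Quantity as loaded: " & dt.GetValue("Quantity", 0).ToString())

    If the raw value is already 1, the rounding comes from the data/query side; if it is 1.45, look at the matrix column's type and display settings instead.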

  • Get matrix row data and put it into header field with formatted search ???

    Hi All,
    I'd like to ask your help with the following:
    On an invoice matrix I want to check the ItemGroup codes of all items in the rows: if some rows' items have ItemGroupCode 101 and some others have anything else, then the header field should be Y, otherwise N.
    My main problem is: how do I put matrix row data into a header field while checking all rows in the matrix?
    A minor problem is that I can't get the formatted search to work on all rows when the formatted search is assigned to the user field in the header.
    If I put the formatted search on a row field, the row field is filled with the proper value, but the same query assigned to the user field in the header works only on the first row.
    What am I doing wrong?
    SELECT USEDPROD = CASE T0.ItmsGrpCod WHEN 101 THEN 'Y' ELSE 'N' END FROM [OITM] T0 WHERE T0.ItemCode = $[$38.1.0]
    (SBO 7.6)
    Any suggestions are welcome.
    Thanks.
    Bálint

    Dear Adele,
    Thanks for the answer. The major one cannot be solved; OK, I'll try to find a workaround.
    However, I still do not understand why my query does not work on all selected rows, just on the first row. That is, if I assign the query to a header field and I'm positioned on the first row it's OK, but when I add a new item on the second row or any of the following rows, the header field is not updated at all.
    Why is that so? Do you have any idea?
    Bálint
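    For the "check all rows" requirement, one hedged approach (it only works once the document rows exist in the database, since a header formatted search cannot iterate over unsaved matrix rows): query the document lines with EXISTS instead of referencing the current row. A sketch for A/R invoices, assuming the usual $[OINV.DocEntry] header reference:

    SELECT CASE WHEN EXISTS
      (SELECT 1 FROM INV1 T1
         INNER JOIN OITM T0 ON T0.ItemCode = T1.ItemCode
       WHERE T1.DocEntry = $[OINV.DocEntry] AND T0.ItmsGrpCod = 101)
    THEN 'Y' ELSE 'N' END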

  • Problems Receiving Emails and some data on WIFI since latest software update.

    Problems receiving emails and some data on WiFi since the latest software update.
    My BlackBerry Torch will not receive emails over WiFi since the latest software update. I've tried numerous WiFi networks and several handsets, even had them replaced and received a brand new one. Still the same problem.
    When turning off the WiFi network on the Torch, the phone stalls for 5-10 minutes while it tries to communicate with the network (Vodafone) to receive outstanding emails and data.
    Can someone please inform BlackBerry and get a quick fix for this?

    Sounds like you have Do Not Disturb turned on.
    Settings - Do Not Disturb - Off

  • Problem with BTE and FI-parking- no data from memory ID to ABAP

    Hi Experts,
    I have a scenario in Business Workflow where I want to catch the data (BKPF & BSEG) after SAP transaction processing; the event is changing a parked FI document with FBV2. I'm trying to use a BTE to receive the data after the transaction has been processed.
    All BTE steps seem to be activated, because I can debug my functions when processing FBV2, but I cannot get any data into my code after processing.
    I have created my project in BF24:
    ZXXX Testing: WF-memory ID
    I have also linked a few FI events to my BTE interface function in BF34:
    00001130 ZXXX ZXXX_FIPP_CHANGE_BTE
    00002213 ZXXX ZXXX_FIPP_CHANGE_BTE
    00002217 ZXXX ZXXX_FIPP_CHANGE_BTE
    My BTE function ZXXX_FIPP_CHANGE_BTE is as follows:
    DATA: memid(15) VALUE 'ZXXX_2217'.
    * Initialize
    CLEAR: t_vbkpf,
           t_vbsegs.
    FREE MEMORY ID 'ZXXX_2217'.
    * Backtracking of the BTE
    EXPORT t_vbkpf
           t_vbsegs
      TO MEMORY ID memid.
    ENDFUNCTION.
    I have also created a function that calls FBV2:
    DATA: memid(15) VALUE 'ZXXX_2217'.
    CALL TRANSACTION 'FBV2'.
    * Import the memory of the BTE
    IMPORT t_vbkpf
           t_vbsegs
      FROM MEMORY ID memid.
    * Free the memory ID
    FREE MEMORY ID memid.
    When I set a breakpoint in both functions and execute FBV2, I reach the breakpoints. The problem is that the data is not passed from memory ID 'ZXXX_2217' into the function after FBV2 (this statement: IMPORT t_vbkpf t_vbsegs FROM MEMORY ID memid.).
    What could be missing? Both functions are called, but NO DATA is passed from the memory ID to my internal tables. This seems to be a problem with memory IDs. Also, in my BTE function ZXXX_FIPP_CHANGE_BTE I get sy-subrc = 4 when executing "FREE MEMORY ID 'ZXXX_2217'.".
    All hints appreciated,
    Jani

    Hi Ramki,
    I must be frustrating you with these stupid questions... You have already supported me hugely in getting more familiar with BTEs, for example not to commit inside a BTE. Big thanks for that!
    In all my tests the project in BF24 has always been active, and yes, I have always changed the parked document to trigger the BTE.
    I still have a problem. I copied all the customizing and coding into another system, with the same disappointing results. Let me describe this once more:
    My entry in BF24:
    Product: ZBE
    Text: Jani testing
    RFC destination:
    Active: X
    My entry in BF34:
    Event: 00002214
    Product: ZBE
    Ctr:
    Appl:
    Function module: ZBE_BTE
    My BTE function:
    DATA: memid(15) VALUE 'ZBE_BTE2214'.
    DATA: t_vbkpf LIKE fvbkpf OCCURS 0 WITH HEADER LINE.
    FREE MEMORY ID memid.
    EXPORT t_vbkpf TO MEMORY ID memid.
    And the function ZBE_BTE_EXECUTE where I call FBV2:
    DATA: memid(15) VALUE 'ZBE_BTE2214'.
    DATA: t_vbkpf LIKE fvbkpf OCCURS 0 WITH HEADER LINE.
    CALL TRANSACTION 'FBV2'.
    IMPORT t_vbkpf FROM MEMORY ID memid.
    FREE MEMORY ID memid.
    When I run my function ZBE_BTE_EXECUTE, with a breakpoint set in ZBE_BTE, the FBV2 call does not reach the breakpoint in ZBE_BTE at all! This occurs when using event linkage 00002214, and I do make a real change to the parked document.
    But if I change the entry in BF34 to link event 00002217 or 00002218, the process reaches my breakpoint in the BTE function ZBE_BTE! But in these cases too, it does not import the data after the transaction call in function ZBE_BTE_EXECUTE. When I continue debugging from the BTE function ZBE_BTE further into the SAP coding, I can see that my internal table t_vbkpf is filled. But when I leave the SAP coding after FBV2 is completed and return to ZBE_BTE_EXECUTE, where t_vbkpf should be filled from the memory ID, t_vbkpf is empty!
    Again, as a conclusion, in this system:
    For BTE event 00002214 this logic does not work at all. For events 00002217 and 00002218 the BTE does get activated, but it does not bring any data from the memory ID into my ABAP. The strangest thing is that I am using absolutely the same kind of coding as you.
    Can you see whether I have to fill those empty fields (Ctr. & Appl.) in my BF34 customizing? Or is some other customizing action missing?
    Br,
    Jani
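    A hedged observation on the symptom above: EXPORT ... TO MEMORY ID only survives within one internal session (roll area). If the BTE fires in a different session or in the update task during FBV2 posting, the calling program never sees that memory, which would explain a filled t_vbkpf inside the BTE and an empty one after CALL TRANSACTION. A workaround sketch that survives process boundaries is the database-backed variant on the standard INDX cluster table (the area 'zz' and the ID are arbitrary examples):

    * In the BTE function:
    EXPORT t_vbkpf TO DATABASE indx(zz) ID 'ZBE_BTE2214'.
    * In ZBE_BTE_EXECUTE, after CALL TRANSACTION 'FBV2':
    IMPORT t_vbkpf FROM DATABASE indx(zz) ID 'ZBE_BTE2214'.
    DELETE FROM DATABASE indx(zz) ID 'ZBE_BTE2214'.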

  • Problem with extract huge data in WEBI (Errors:WIO 30280 and ERR_WIS_30270)

    Hi gurus, I need your help.
    When we run the query for the report, we get an error in Web Intelligence.
    The error can be one of two types, and they occur in random rotation.
    First error: There is no memory available. Please close document to free memory (WIO 30280)
    Second error: An internal error occurred while calling the 'processDPCommands' API. (ERR_WIS_30270)
    When we try reopening Web Intelligence, or restarting the servers or the machine, we get no results and see the error again.
    I have tried changing different parameters in Universe Designer for the connection and the universe, and in the Central Management Console for WebI, but that did not solve the problem.
    The query selects data from a huge table (4.7 million rows), and when I set a limit on the maximum number of rows in WebI, my report works correctly.
    Is there any way to solve these problems associated with large data volumes? What causes this problem when we try to retrieve the full data set?
    We use BusinessObjects Enterprise XI 3.1, version 12.4.0.966.
    Our system: Windows Server 2008 R2 Enterprise SP1 (64-bit), Intel 2.67 GHz (2 processors), 8 GB RAM.
    Thanks.
    Ruslan

    Hi Brad,
    here we are talking about XI 3.1, which has the limitations inherited from the operating system, as this blog post explains in plenty of detail.
    In BI 4.0 you can leverage larger datasets without any problems; however, you need to properly size and configure the services.
    There are several documents out there:
    How to configure the APS  http://scn.sap.com/docs/DOC-31711
    Companion Guide
    https://service.sap.com/~sapdownload/011000358700000307202011E/SBO_BI_4_0_Companion_V4.pdf
    Web Intelligence Sizing Guide
    https://service.sap.com/~sapdownload/011000358700001403692011E/BI_4_0_WEBI_NEW.pdf
    Best regards,
    Simone

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record, using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables, represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I've done the best I can by constraining the main record to a single row passed to the XMLQUERY. Given Mark's post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That's going to make the query much more complicated, but right now we're more concerned about performance than complexity.
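    For reference, the shape of that relational-side variant (a sketch, untested against this schema): extract the codes with XMLTABLE so the optimizer sees ordinary columns, then join the code table in plain SQL, where hash joins are available:

    select r.ssn, x.elem_no, x.code1, c1.description
    from records r,
         xmltable('/Root/Element' passing r.xmlrec
                  columns elem_no for ordinality,
                          code1   varchar2(4) path 'Subelement1/Code') x,
         codes c1
    where r.ssn = '10000'
      and c1.code = x.code1;

    Reassembling the enriched XML from the joined rows would then be done with XMLELEMENT/XMLAGG.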

  • My grandson got ahold of my granddaughter's iPod touch and I no longer have a Settings button. The time and date on it are wrong. How can I fix this problem?

    I need help... I no longer have all of the buttons on my iPod touch screen. My grandson got hold of it and boom... The time and date are also wrong, and I have no access to fix this problem. There is no Settings button.
    Any help would be appreciated.

    Assuming that by buttons you mean the icons for various apps are missing: use the Spotlight search feature (double-click the Home button and go all the way to the left, past the music controls) and search for: Settings. Go to Settings>General>Reset>Reset Home Screen Layout, and also correct the time in the General settings.
    The Settings app can't be deleted or hidden via Restrictions.

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some googling and found that we need to add something to the essbase.cfg file, like the below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlockLow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current setup allows.
    From the docs (quoting so I don't have to write it out, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply due to standard growth or to a recent change that has made an unexpectedly significant impact.
