Problem with lack of knowledge about accessing data

Hi gurus,
I'm a newbie struggling for survival in a job. I have been assigned a job where the client's company has ECC 6, Oracle, and SAP 4.6 as their information hub. I have been told to produce a document on what data to access from them. Could any of you throw light on what I need to do, or provide a template? I'll be grateful.
regards
Srinivas

Hi,
If it's not feasible to create the view in the source system itself, creating an InfoSet in BW is the best option available.
You will have to populate two DSOs from the two DataSources, then just create the InfoSet and make a join on field 'b'.
Regards,
Yogesh.
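
For illustration, here is a minimal sketch of what such an InfoSet join amounts to, written as an ABAP Open SQL inner join over the two DSOs' active tables. All table and field names below are hypothetical placeholders, not taken from this thread:

* Minimal sketch: the InfoSet inner join on field 'b', expressed as an
* Open SQL join. /BIC/AZDSO0100 and /BIC/AZDSO0200 stand in for the
* active tables of the two DSOs; the field names are made up.
TYPES: BEGIN OF ty_joined,
         b      TYPE c LENGTH 10,
         field1 TYPE c LENGTH 20,
         field2 TYPE c LENGTH 20,
       END OF ty_joined.
DATA lt_joined TYPE STANDARD TABLE OF ty_joined.

SELECT a~b a~field1 c~field2
  INTO TABLE lt_joined
  FROM /bic/azdso0100 AS a
  INNER JOIN /bic/azdso0200 AS c ON a~b = c~b.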

Similar Messages

  • HT5361 Today I experienced a problem with my mail: the time on each email received and sent shows 18:06 and the date 22nd July, irrespective of the actual time

    Today I experienced a problem with my mail: the time on each email received and sent shows 18:06 and the date 22nd July. Thank you, John

    Incorrect date or time displayed in various applications

  • ACS 5.3 Authorization problem using Identity Groups in Access Policy Rule

    Hello guys, I have found a problem which I can't solve regarding authorization using Identity Groups in an Access Policy rule.
    ACS version: 5.3.0.40.6 (internal build B.839)
    I have a very simple RADIUS authorization rule which authorizes a user based on the correct Identity Group.
    The requested Identity Group exists.
    The testing user is created in Internal Users and has the requested Identity Group assigned.
    RADIUS Access Policy:
    Authentication against an Identity Store Sequence, where the authentication server is an external RSA SecurID device and additional attribute retrieval is configured from Internal Users.
    Authorization is very simple – one rule with a single condition, which is: Identity Group - in - Requested_Testing_Rule. The Default rule is set to Deny.
    When I try to log in with my testing user, authentication against RSA SecurID is OK, but authorization is denied by the Default rule – it looks like my rule with the Identity Group is completely ignored.
    I manage several other ACS servers (version 5.3, but with older patches) where similar rules work without problems.
    What I have tested:
    Removed the testing user and created the account again
    Renamed the Identity Group
    Used another Identity Group
    Removed the Access Policy rule and created it again
    Used the Compound Condition: System:Identity Group
    Used the Compound Condition: System:UserID instead of the Identity Group in the rule (this works without problems)
    Do you have any idea where the problem could be?

    OK guys, it started working yesterday without any configuration change. Maybe it was some database inconsistency which was solved by ACS itself.

  • Problem with BTE and FI parking - no data from memory ID to ABAP

    Hi Experts,
    I have a scenario in Business Workflow where I want to catch the data (BKPF & BSEG) after SAP transaction processing - the event is a change to a parked FI document with FBV2. I'm trying to use a BTE to receive the data after the transaction is processed.
    All BTE steps seem to be activated - I can debug my functions when processing FBV2 - but no data reaches my coding after processing.
    I have created my project in BF24:
    ZXXX Testing: WF-memory ID
    I have also linked a few FI events to my BTE interface function in BF34:
    00001130 ZXXX ZXXX_FIPP_CHANGE_BTE
    00002213 ZXXX ZXXX_FIPP_CHANGE_BTE
    00002217 ZXXX ZXXX_FIPP_CHANGE_BTE
    My BTE function ZXXX_FIPP_CHANGE_BTE is as follows:
    DATA: memid(15) VALUE 'ZXXX_2217'.
    * Initialize
    CLEAR: t_vbkpf,
           t_vbsegs.
    FREE MEMORY ID 'ZXXX_2217'.
    * Backtracking of BTE
    EXPORT t_vbkpf
           t_vbsegs
           TO MEMORY ID memid.
    ENDFUNCTION.
    I have also created a function to call FBV2:
    DATA: memid(15) VALUE 'ZXXX_2217'.
    CALL TRANSACTION 'FBV2'.
    * Import memory of BTE
    IMPORT t_vbkpf
           t_vbsegs
           FROM MEMORY ID memid.
    * Free memory ID
    FREE MEMORY ID memid.
    When I set a breakpoint in both functions and execute FBV2, I reach the breakpoints. The problem is that the data is not passed from memory ID 'ZXXX_2217' into the function after FBV2 (this statement: IMPORT t_vbkpf t_vbsegs FROM MEMORY ID memid.).
    What could be missing? Both functions are called, but NO DATA is passed from the memory ID to my internal tables. This seems to be a problem with memory IDs. Also, in my BTE function ZXXX_FIPP_CHANGE_BTE I get sy-subrc = 4 when executing "FREE MEMORY ID 'ZXXX_2217'.".
    All hints appreciated,
    Jani

    Hi Ramki,
    I must be frustrating you with these stupid questions... You have already helped me hugely to get more familiar with BTEs - for example, not to COMMIT inside a BTE. Big thanks for that!
    In all my tests the project in BF24 has always been active; and yes, I have always made a change to the parked document to trigger the BTE.
    I still have a problem. I copied all customizing and coding into another system - with the same disappointing results. Let me describe this once more:
    My entry in BF24:
    Product:            ZBE
    Text:               Jani testing
    RFC destination:
    Active:             X
    My entry in BF34:
    Event:              00002214
    Product:            ZBE
    Ctr:
    Appl:
    Function module:    ZBE_BTE
    My BTE function:
    DATA: memid(15) VALUE 'ZBE_BTE2214'.
    DATA: t_vbkpf LIKE fvbkpf OCCURS 0 WITH HEADER LINE.
    FREE MEMORY ID memid.
    EXPORT t_vbkpf TO MEMORY ID memid.
    And the function ZBE_BTE_EXECUTE where I call FBV2:
    DATA: memid(15) VALUE 'ZBE_BTE2214'.
    DATA: t_vbkpf LIKE fvbkpf OCCURS 0 WITH HEADER LINE.
    CALL TRANSACTION 'FBV2'.
    IMPORT t_vbkpf FROM MEMORY ID memid.
    FREE MEMORY ID memid.
    When I run my function ZBE_BTE_EXECUTE - with a breakpoint set in ZBE_BTE - the FBV2 call does not reach the breakpoint in ZBE_BTE at all! This occurs with event linkage 00002214, and I do make a real change to the parked document.
    But if I change the entry in BF34 to link to event 00002217 or 00002218, the process does reach my breakpoint in the BTE function ZBE_BTE! In these cases, however, it still does not import the data after the transaction call in ZBE_BTE_EXECUTE. When I debug from my BTE function ZBE_BTE further into the SAP coding, I can see that my internal table t_vbkpf is filled. But when I leave the SAP coding after FBV2 completes and return to ZBE_BTE_EXECUTE - where t_vbkpf should be filled from the memory ID - t_vbkpf is empty!
    Again, as a conclusion, in this system:
    For BTE event 00002214 this logic does not work at all. For events 00002217 and 00002218 it does get activated, but it does not bring any data from the memory ID into my ABAP. The strangest thing is that I use exactly the same kind of coding as you.
    Can you see whether I have to fill those empty fields (Ctr. & Appl.) in my BF34 customizing? Or is some other customizing step missing?
    Br,
    Jani
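
    A note on the mechanism involved here: EXPORT ... TO MEMORY ID writes to ABAP memory, which is shared only between the internal sessions of one external session. If the BTE fires in a different session, for example in the update task, the caller cannot see the exported data, which would match the symptoms described above. A minimal sketch of the handoff under that assumption, reusing the names from the posts:

    * Minimal sketch of the intended ABAP-memory handoff. EXPORT/IMPORT
    * ... MEMORY ID only works within one external session; if the BTE
    * runs in the update task, the IMPORT below finds nothing.
    DATA: t_vbkpf TYPE STANDARD TABLE OF fvbkpf,
          memid(15) TYPE c VALUE 'ZBE_BTE2214'.

    * Writer side (inside the BTE function):
    EXPORT t_vbkpf TO MEMORY ID memid.

    * Reader side (after CALL TRANSACTION 'FBV2'):
    IMPORT t_vbkpf FROM MEMORY ID memid.
    IF sy-subrc <> 0.
    * Nothing found: the EXPORT ran in another session, so its ABAP
    * memory is not visible here.
    ENDIF.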

  • Problems with 3.1.2 download, & iTunes dates...

    So I purchased the 3.1.2 update and it did not download; it went to download 3.1.1 instead. Strange... It does not download, citing a network connection problem. On my end I'm OK - you are reading my post, after all - so could it be on Apple's side? Also, I purchased it today, yet the date it gives is 11.30.09!?! There are also discrepancies in the recorded dates of my reviews (only one is correct; the rest all show the same date). I reported the problem. Is there any insight you can give into this? These seem like problems that should not be able to happen, especially the dates.
    Help : (

    I am also in the same boat with error 8247... I tried everything I could think of last night, then gave up, thinking maybe it was on the other end and I would try again in the morning. I am getting the same result today as well. The part that's making me more frustrated is that none of the apps I downloaded will open anymore, though I'm still getting my email and able to browse the internet.

  • Problem with native SQL cursor in generic data source

    Hi, All!
    I am implementing a generic DataSource based on a function module.
    Because of the complicated SQL, I can't use Open SQL and the RSAX_BIW_GET_DATA_SIMPLE example "as is".
    So I have to use Native SQL. But I've got a problem with a cursor. When I test my DataSource in RSA3, everything is OK. But if I start the corresponding InfoPackage, I get the error "ABAP/4 processor: DBIF_DSQL2_INVALID_CURSOR". It happens after the selection of the first data package, in the line "FETCH NEXT S1 INTO…". It seems that by the time the system performs the second call of my FM, the opened cursor has already disappeared.
    Has anyone done something like this, and what is incorrect?
    Is it realistic to build a generic DataSource based on an FM using Native SQL open, fetch, close…?
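
    For reference, a hedged sketch of the usual pattern (modeled on RSAX_BIW_GET_DATA_SIMPLE; the variable names and the SELECT are hypothetical): the extractor FM is called once per data package, so the native cursor must be opened on the first call only and remembered across calls. Note also that a database commit between packages can invalidate a plain native cursor, which is one plausible cause of DBIF_DSQL2_INVALID_CURSOR:

    * Minimal sketch of a function-module extractor keeping its Native
    * SQL cursor across calls. Z_TMP and its columns come from this
    * thread; lv_id and p_group are hypothetical (p_group would be an
    * IMPORTING parameter of the FM in practice).
    STATICS: lv_open TYPE c.          " flag survives between calls
    DATA: lv_id TYPE c LENGTH 10,
          p_group TYPE i.

    IF lv_open IS INITIAL.
      EXEC SQL.
        OPEN s1 FOR
          SELECT id FROM z_tmp WHERE group_no = :p_group
      ENDEXEC.
      lv_open = 'X'.
    ENDIF.

    * One FETCH shown per call; a real extractor loops up to the data
    * package size and appends each row to E_T_DATA.
    EXEC SQL.
      FETCH NEXT s1 INTO :lv_id
    ENDEXEC.
    IF sy-subrc <> 0.
      EXEC SQL.
        CLOSE s1
      ENDEXEC.
      CLEAR lv_open.
      RAISE no_more_data.             " assumes the FM declares this exception
    ENDIF.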

    Hi Jason,
    I don't think this SQL is very valuable - it is just an aggregation with some custom rules. The aggregation runs on an InfoProvider which consists of two InfoCubes. We have about 2 billion records in the InfoProvider and about 30 million records in the custom DB table Z_TMP (which certainly has indexes). I have to run this operation on 21 InfoProviders like this, and 20 times for each InfoProvider (with different values of the host variable :p_GROUP)
    SELECT T.T1, SUM( T.T2 ), SUM( T.T3 ), SUM( T.T4 )
            FROM (
                    SELECT F."KEY_EVENT06088" AS T1,
                            F."/BIC/EV_COST" + F."/BIC/EV_A_COST" AS T2,
                            DECODE( D.SID_EVENTTYPE, 23147, 0,
                                                          23148, 0,
                                                          23151, 0,
                                                          23153, 0,
                                                          23157, 0,
                                                          23159, 0,
                                                          24896734, 0,
                                                          695032768, 0,
                                                          695029006, 0,
                                                          695029007, 0,
                                                          695036746, 0, F."/BIC/EV_COST") +
                              DECODE( D.SID_EVENTTYPE, 23147, 0,
                                                          23148, 0,
                                                          23151, 0,
                                                          23153, 0,
                                                          23157, 0,
                                                          23159, 0,
                                                          24896734, 0,
                                                          695032768, 0,
                                                          695029006, 0,
                                                          695029007, 0,
                                                          695036746, 0, F."/BIC/EV_A_COST") AS T3,
                            DECODE( D.SID_EVENTTYPE, 23147, F."/BIC/EV_DURAT",
                                                          23148, F."/BIC/EV_DURAT",
                                                          23151, F."/BIC/EV_DURAT",
                                                          23153, F."/BIC/EV_DURAT",
                                                          23157, F."/BIC/EV_DURAT",
                                                          23159, F."/BIC/EV_DURAT",
                                                          24896734, F."/BIC/EV_DURAT",
                                                          695032768, F."/BIC/EV_DURAT",
                                                          695029006, F."/BIC/EV_DURAT",
                                                          695029007, F."/BIC/EV_DURAT",
                                                          695036746, F."/BIC/EV_DURAT", 0) AS T4
                      FROM "/BIC/VEVENT0608F" F,
                           Z_TMP G,
                           "/BIC/DEVENT06085" D
                      WHERE F."KEY_EVENT06088" = G.ID
                            AND F."KEY_EVENT06085" = D.DIMID
                            AND G.GROUP_NO = :p_GROUP
                            AND ( F."/BIC/EV_COST" < 0 OR F."/BIC/EV_A_COST" < 0 )
                            AND D.SID_EVENTTYPE <> 695030676 AND D.SID_EVENTTYPE <> 695030678
                    UNION
                    SELECT F."KEY_EVNA06088" AS T1,
                            F."/BIC/EV_COST" + F."/BIC/EV_A_COST" AS T2,
                            DECODE( D.SID_EVENTTYPE, 23147, 0,
                                                          23148, 0,
                                                          23151, 0,
                                                          23153, 0,
                                                          23157, 0,
                                                          23159, 0,
                                                          24896734, 0,
                                                          695032768, 0,
                                                          695029006, 0,
                                                          695029007, 0,
                                                          695036746, 0, F."/BIC/EV_COST") +
                              DECODE( D.SID_EVENTTYPE, 23147, 0,
                                                          23148, 0,
                                                          23151, 0,
                                                          23153, 0,
                                                          23157, 0,
                                                          23159, 0,
                                                          24896734, 0,
                                                          695032768, 0,
                                                          695029006, 0,
                                                          695029007, 0,
                                                          695036746, 0, F."/BIC/EV_A_COST") AS T3,
                            DECODE( D.SID_EVENTTYPE, 23147, F."/BIC/EV_DURAT",
                                                          23148, F."/BIC/EV_DURAT",
                                                          23151, F."/BIC/EV_DURAT",
                                                          23153, F."/BIC/EV_DURAT",
                                                          23157, F."/BIC/EV_DURAT",
                                                          23159, F."/BIC/EV_DURAT",
                                                          24896734, F."/BIC/EV_DURAT",
                                                          695032768, F."/BIC/EV_DURAT",
                                                          695029006, F."/BIC/EV_DURAT",
                                                          695029007, F."/BIC/EV_DURAT",
                                                          695036746, F."/BIC/EV_DURAT", 0) AS T4
                    FROM "/BIC/VEVNA0608F" F,
                         Z_TMP G,
                         "/BIC/DEVNA06085" D
                    WHERE F."KEY_EVNA06088" = G.ID
                          AND F."KEY_EVNA06085" = D.DIMID
                          AND G.GROUP_NO = :p_GROUP
                          AND ( F."/BIC/EV_COST" < 0 OR F."/BIC/EV_A_COST" < 0 )
                          AND D.SID_EVENTTYPE <> 695030676 AND D.SID_EVENTTYPE <> 695030678
                 ) T
            GROUP BY T.T1

  • Problems with Importing / Exporting Keywords and Metadata

    Hi to all!
    I recently upgraded to Aperture 3 and upgraded my referenced library.
    Today I opened the Keyword HUD and noticed some stray keywords in my list, which seem to be older ones, since the new numbering indicated that they are not applied to any pictures.
    So I deleted them.
    Then I noticed the <Imported Keywords> folder, opened it, and it also contained a large number of previous keywords. They too seemed not to be in use, so I removed them as well.
    Then I locked the Keyword HUD.
    Now my question: if I export a version with the 'include metadata' option ticked, edit the version in Photoshop, and afterwards import it back into the Aperture library, I have the problem.
    I have tried 'Import Metadata' and clicked 'Append'. It then recognizes the former keywords, which I would appreciate, but not as 'Imported Keywords'.
    If I opened the file with the External Editor and returned it, I guess I would not have this problem. Usually I open the referenced RAW file and import the edited version back into Aperture. Keywords are stored in the library, so I would not get the previously assigned keywords then, is that right?
    By the way, are keywords part of the metadata or not?
    Are there any workarounds?
    And have other people also had problems with their keyword list after upgrading?
    Thanks for any ideas / info!
    Michael

    Added header to CSV and to the code:
    $ImportFile = Import-Csv "C:\Users\username\Desktop\Scripts\Powershell\Epic\SCCM CI\Tags.csv" -Header Computer
    foreach ($Computer in $ImportFile) {
        $path = "\\$Computer\c$\Epic\bin\7.9.2\Epic Print Service"
        $xml = Select-Xml -Path "$path\EpicPullService.config.xml" -XPath //EpicPullService//Cleanup | Select -ExpandProperty Node
        if ($xml.ArchiveHours -eq '12' -and $xml.DeleteHours -eq '120') {
            $Compliance = $True
        } else {
            $Compliance = $False
        }
        "$Computer","$Compliance" | Export-Csv "C:\Users\username\Desktop\Scripts\Powershell\Epic\SCCM CI\Results.csv"
    }
    Results:
    select-xml : Cannot find path '\\@{Computer=SW1412-16985}\c$\Epic\bin\7.9.2\Epic Print Service\EpicPullService.config.xml' because it does not exist.
    At C:\Users\username\Desktop\Scripts\Powershell\Epic\SCCM CI\Check_PullServiceXML.ps1:4 char:8
    + $xml = select-xml -path "$path\EpicPullService.config.xml" -xpath //EpicPullServ ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : ObjectNotFound: (\\@{Computer=SW...vice.config.xml:String) [Select-Xml], ItemNotFoundException
        + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SelectXmlCommand
    If it is not a CSV file, then just get it with Get-Content:
    Get-Content "C:\Users\UserName\Desktop\Scripts\Powershell\Epic\SCCM CI\Tags.csv" |
        ForEach-Object{
            $computer = $_
            $path = "\\$computer\c$\Epic\bin\7.9.2\Epic Print Service\EpicPullService.config.xml"
        }
    ¯\_(ツ)_/¯

  • Problems with Delta Extraction for 0CRM_OPPT_H (no data found)

    Hi,
    I have some problems with the delta extraction of the InfoSource 0crm_oppt_h (CRM Opportunities Header). After initialization I get no delta data from the CRM system.
    What I already did:
    Activated the 0crm_oppt_h DataSource (checked its functionality with RSA3)
    Started the InfoPackage (init) on the BW side (worked fine)
    Checked the status of the DataSource on the CRM system using BWA7 ("initial upload" is unmarked; "delta active" is marked; and what worries me is that the column "Queue exists" is unmarked...)
    If I change anything in the opportunity (like Phase or Expected Sales Vol.), the delta extraction picks up no changes.
    Could You help me out, please?
    Best regards,
    Markus Svec

    hi Markus,
    try checking OSS note 788172:
    Release status: Released for Customer
    Released on: 23.03.2005
    Priority: Correction with high priority
    Category: Program error
    Symptom
    No data exists in delta extraction from the CRM server to the BW system for business transactions if parallel processing is applied as per note 639072. But data is extracted if parallel processing is switched off, i.e. when BWA_NUMBER_OFF_PROCESSES is set to 1, there is data during delta. This applies to the following DataSources:
    0BBP_TD_CONTR_1
    0CRM_COMPLAINTS_I
    0CRM_LEAD_ATTR
    0CRM_LEAD_H
    0CRM_LEAD_I
    0CRM_OPPT_ATTR
    0CRM_OPPT_H
    0CRM_OPPT_I
    0CRM_QUOTATION_I
    0CRM_QUOTA_ORDER_I
    0CRM_SALES_ACT_1
    0CRM_SALES_CONTR_I
    0CRM_SALES_ORDER_I
    0CRM_SRV_CODES
    0CRM_SRV_CONFIRM_H
    0CRM_SRV_CONFIRM_I
    0CRM_SRV_CONTRACT_H
    0CRM_SRV_PROCESS_H
    0CRM_SRV_PROCESS_I
    Other terms
    DataSources, BWA, initial extraction, delta init, parallel processing, no data in delta.
    Reason and Prerequisites
    There is an update on the generated delta table which causes data corruption in running delta initializations, as the changed delta sets are deleted with every further update on documents. There is also an OPEN CURSOR statement without a FETCH in SMOX3_GET_DATA.
    Solution
    The problem is solved with the attached corrections. After applying the corrections, a new initialization of the affected DataSources is necessary.

  • Problem with Content Server 4 keystore access on Ubuntu 8.04

    Hello,
    While setting up the Content Server, I encountered this problem with the fulfillment server status check-up:
    exception
    javax.servlet.ServletException: Servlet execution threw an exception
    root cause
    java.lang.Error: Problem reading key and certificate from keystore
         com.adobe.adept.fulfillment.security.ServerConfig.init(ServerConfig.java:201)
         com.adobe.adept.fulfillment.security.ServerConfig.getSigningURL(ServerConfig.java:48)
         com.adobe.adept.fulfillment.servlet.FulfillmentServerStatus.getServers(FulfillmentServerStatus.java:34)
         com.adobe.adept.common.servlet.Status.checkUp(Status.java:355)
         com.adobe.adept.common.servlet.Status.doGet(Status.java:421)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    I've created operator.p12 according to the instructions in the Quickstart guide and placed it in /etc, where it is accessible by the server. I used OpenSSL 0.9.8k for this.
    I can use "openssl pkcs12 -in operator.p12 -out file.pem" to view the contents of the file.
    My Content Server fulfillment configuration is as follows:
    com.adobe.adept.init1=com.adobe.adept.shared.util.SharedInitialization
    com.adobe.adept.log.level=trace
    com.adobe.adept.log.file=/var/log/fulfillment.log
    com.adobe.adept.persist.sql.driverClass=com.mysql.jdbc.Driver
    com.adobe.adept.persist.sql.connection=jdbc:mysql://127.0.0.1:3306/adept
    com.adobe.adept.persist.sql.dialect=mysql
    com.adobe.adept.persist.sql.user=ereading
    com.adobe.adept.persist.sql.password=********
    com.adobe.adept.fulfillment.security.licensesignURL=https://eusigningservice.adobe.com/licensesign
    com.adobe.adept.fulfillment.security.keystore.user=operator
    com.adobe.adept.fulfillment.security.keystore.password=********
    com.adobe.adept.fulfillment.security.pkcs12.file=file:///etc/operator.p12
    com.adobe.adept.serviceURL=http://******.dmz.******.org/fulfillment
    Any ideas?
    Best regards,
    Teemu

    To solve this, change this:
    com.adobe.adept.fulfillment.security.pkcs12.file=file:///etc/operator.p12
    to this:
    com.adobe.adept.fulfillment.security.pkcs12.file=/etc/operator.p12

  • Problems with the EM when it accesses the instance

    hi,
    can you help me? For almost 3 days I have tried to open the EM after starting up my instance, as I used to do... but now I can't start the EM, because when I try, it tells me that the instance is stopped, that the instance is in shutdown mode (I started the instance beforehand!).
    So I start up the instance via EM control, but the same thing happens: after I start the instance via EM control, it still tells me that the instance is stopped.
    Recently the passwords on my database were changed: the SYSTEM password, SYSMAN, and other users. Is there any connection between the EM and the password changes?
    The following message appears when I try to start up the database via OEM:
    RemoteOperationException: File does not exist or is inaccessible.
    thanks

    I've done that many times; I have also restored the iPhone software. It seems to be a problem with the modem... I'm desperate, I want to throw the iPhone through the window, hehe.

  • Problems with lack of connection between Macs on a network.

    For about a month I've had both my MacBook and Mac Pro connecting fine over my network, but suddenly they don't recognise each other. Is this a problem with my proxy settings within Network Settings? Sorry if this question is already answered on the forum, but I can't find it.

    If the endpoint doesn't answer the first call, the expected behaviour is that the TP server tries dialling again as many times as you've set in TMS, but it's only trying once for me.
    This might be of interest to you as to why TMS might only dial out once to an endpoint; see my reply near the end of the discussion:
    conductor-telepresence-server-dial-out-and-redial-when-no-one-answer

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    • Converting legacy tabular data into XML records; and
    • Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    • Clarify what we are trying to accomplish, which might expose completely different approaches than the ones I have tried
    • Let you know what I have tried so far and the rationale for my approach, to help expose flaws in my thinking and share what I have learned
    • Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    • Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    • Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    • Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    • Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    • Stored the source XML records as CLOBs: we did this to preserve the records exactly as they were certified and sent by providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    • Stored the consolidated XML employee records as "binary XML". We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    • Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    • Stored records as XMLTYPE columns in a table with other key columns, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    • Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY, and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    • Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL, both with XMLFOREST and with XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable, but it interferes with other database activity.
    • Used XMLQUERY plus ora:view() plus XPath constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I'm baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPath constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations, not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
      ssn varchar2(20);
      xmlrec xmltype;
      i integer;
    BEGIN
      xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
      <Id>123456789</Id>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
    </Root>');
      for i IN 1..100000 loop
        insert into records(ssn, xmlrec) values (i, xmlrec);
      end loop;
      commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
      <Root>
        <Id>123456789</Id>
        {for $e in $r/Element
         return
          <Element>
            <Subelement1>
              {$e/Subelement1/Code}
              <Description>
                {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
              </Description>
            </Subelement1>
            <Subelement2>
              {$e/Subelement2/Code}
              <Description>
                {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
              </Description>
            </Subelement2>
            <Subelement3>
              {$e/Subelement3/Code}
              <Description>
                {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
              </Description>
            </Subelement3>
          </Element>}
      </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Problem with: Upload file through Web access.

    We have configured iFS 1.1 on AIX 4.3. Everything is running fine, such as ifsstart and ifsmgr. It shows all protocols and agents running.
    I am able to access iFS through the web access program, and it accepts my login and password. After logging in to the iFS server, when we try to upload any file using the Browse option, it gives us the following error.
    Request URI:/ifslogin/jsps/upload2.jsp
    Exception:
    java.lang.NoClassDefFoundError: java/sql/Blob
         at oracle.jdbc.driver.OracleStatement.get_blob_value(Compiled Code)
         at oracle.jdbc.driver.OracleStatement.getBLOBValue(Compiled Code)
         at oracle.jdbc.driver.OracleStatement.getObjectValue(
    Can anybody guide us? We do not have Java knowledge.
    Thanks
    Dilip

    Hi,
    create an attribute (say ca_fileupload) of type XSTRING under the context and bind that attribute to the data property.
    To bind the attribute, just click on the element file_up; there you can see the data property. Click on it and you will see all the nodes and attributes you have created; there, click on the attribute (ca_fileupload).
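
    For illustration, a minimal Web Dynpro ABAP sketch of reading the uploaded bytes once the FileUpload element's data property is bound as described. The attribute name follows the reply above; the action handler name is hypothetical:

    * Minimal sketch: in an action handler (e.g. ONACTIONUPLOAD), read
    * the file content from context attribute CA_FILEUPLOAD after the
    * FileUpload UI element's data property has been bound to it.
    DATA lv_content TYPE xstring.
    wd_context->get_attribute(
      EXPORTING name  = 'CA_FILEUPLOAD'
      IMPORTING value = lv_content ).
    * lv_content now holds the raw bytes of the uploaded file.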

  • Problems with email setup - Can't access security check

    Hi all, I've recently acquired a BB Curve 9300 and managed to set up everything, but I still have one unsolved problem:
    I set up a Gmail email account from my provider's mobile email setup page (http://mobileemail.vodafone.it) and asked to sync both contacts and calendars.
    The email was correctly set up, as I can read and send emails, but synchronization doesn't work.
    My provider's site says:
    If you have not already done so, to start calendar and contact synchronization, complete the security activation on your BlackBerry® device:
    1. On the Home screen, click the Setup icon and click Email Accounts or Email Settings.
    2. After you open the application, the security activation starts. When security activation is complete, calendar and contact synchronization automatically begins.
    If I try to do just that (open the menu, click on the Setup icon and then choose the "Email Accounts" icon), the device briefly shows an hourglass icon, hides it, and then does nothing.
    If I try to access "Email Account Management" from inside the mailbox options, I get the following error message:
    "unable to open email set-up application. contact your wireless service provider"
    I tried contacting my wireless service provider both in-shop and on the phone, and their theory is that the email setup application is too new and is not compatible with my version of the BlackBerry OS (which is 6.0 bundle 2949; the device says it's up to date).
    I tried a battery pull, but it didn't help.
    Any suggestion would be really appreciated.
    Thanks,
    Marcello

    Gotcha.
    Are you able to access the email account from https://bis.eu.blackberry.com/html?brand=vodauk ?
    If so, delete the email account from there, then try adding it again using the integrate link on the phone.

  • Library problems with two XP user accounts accessing music in shared folder

    I'm sorry if this has already been answered; I have not been able to find the answer. My wife and I each have a different user account on the same Windows XP machine. I recently combined our iTunes libraries into the shared folder on the computer. I correctly pointed iTunes (in each of our accounts) to the correct location for the music files. I did an "Add Folder to Library" to make sure each account had all of the music. When I did this, I got duplicate listings of all of the songs that also had exclamation points. When I did a "Get Info" on these songs, I found that each song was correctly pointed to the correct file location. I laboriously deleted each "exclamation point" song one at a time (in each account). Now the problem I have is that whenever my wife or I close iTunes and the other person opens iTunes in their XP user account, none of the songs can be found. The best way I've been able to resolve this is to (unfortunately) delete the entire library, and do an "Add Folder To Library" to find all of the song locations and include all of the songs back into the library. Surely this can't be how this is supposed to work.
    So, my questions are:
    - Do I need to add new albums / songs one at a time to avoid duplicates (rather than use the "Add Folder" for the entire iTunes music folder again)?
    - How do I set up iTunes so that my wife and I can each go into iTunes in our separate XP accounts and have all of our songs right where we left them (WITHOUT having to delete the entire library and "add" again)?
    FYI: I'm using the latest version of iTunes and (again) I'm using Windows XP.
    My wife and I really want to each get a new iPod Nano, but if I can't trust iTunes... forget it.
    THANKS!!!

    I have this setup working: two accounts with all files stored in a shared folder; mine is on a network drive. Each account has its own library and can access all files. We also use one iTunes account for rare purchases.
    This is how I did the initial setup:
    Make a folder in the shared folder. I called mine itunes.
    This is where iTunes stores its files.
    With the second account, point iTunes at the shared folder.
    Import the folder you created earlier.
    This worked for me without duplications. The only complication is that when importing from a CD with one account, the other doesn't know about it. The other account must then import the folder for the CD just imported. We keep track of this with a spreadsheet, updated to show which account has which CD files.

Maybe you are looking for

  • Pages 5.1 error when trying to open documents

    Hey everyone, So I'm running Pages 5.1 and today, for the first time and seemingly out of nowhere, Pages has decided it will not open any files I have stored in iCloud or saved to my desktop barring one exception. If I send a document from iCloud via

  • Tracking ABAP web dynpro events in back end SAP

    Hi Experts, is there any way to capture the events on the ABAP Web Dynpro in the back end? The scenario is that I have some particular code which gets executed in the back end when the user clicks on any button or tab in the appraisal document, wh

  • iOS 4 Alarm Bug

    The time changed back to Standard Time this past weekend. My 3rd Gen. Touch readjusted its clock without a problem, but now my alarms are an hour late. For instance, today I woke up on my own at 6:42. My 6:30 alarm is set, but no alarm. My alarm wi

  • Adding a past holiday in Holiday Calendar

    Hi, We have a Holiday Calendar for our organisation. However, 14th April being notified later was not added in the Calendar. I want to add it now in such a way that the IT2001 (which is already updated as Not Marked for all the employees for 14th Apr

  • Mac Server Only in mixed environment?

    I want to get rid of my Windows 2003 server and have my Mac Server act as directory server for both my Macs and Windows PCs. Is this possible?