Storing instance-related data on C side.

Hello!
I have a question on JNI:
In my class all methods are native. I need to store some instance-related data for my processing.
Is there a way to do this with less overhead than by placing this data in the Java class and accessing it through JNI?
I don't need to access these fields from Java, only from C.
I.e. the solution I have now is:
Java:
public class stream {
    private int handle = -1;
    public native int open(byte[] szFilename, int mode);
    public native int read(byte[] buffer, int offset, int length);
}
C:
JNIEXPORT jint JNICALL stream_open (JNIEnv *env, jobject obj, jbyteArray szFilename, jint mode) {
  jclass cls = (*env)->GetObjectClass(env, obj);
  jfieldID fid;
  jint handle;
  fid = (*env)->GetFieldID(env, cls, "handle", "I");
  if (fid == NULL)
    return -1;                                  /* the function returns jint, so return a value */
  handle = <open>;
  (*env)->SetIntField(env, obj, fid, handle);   /* set the field on the object, not the class */
  return handle;
}

JNIEXPORT jint JNICALL stream_read (JNIEnv *env, jobject obj, jbyteArray buffer, jint offset, jint length) {
  jclass cls = (*env)->GetObjectClass(env, obj);
  jfieldID fid;
  jint handle;
  fid = (*env)->GetFieldID(env, cls, "handle", "I");
  if (fid == NULL)
    return -1;
  handle = (*env)->GetIntField(env, obj, fid);  /* read the field from the object */
  ... (perform read)
}
Is it possible to make this more efficient, without having 'int handle' in Java? Of course, it has to be different for each instance.
Thank you in advance
Sergey

Your native code is just subroutines, and "normally" data kept around is either local to the subroutine (and disappears between calls), or is global (and so not instance data).
What is not clear from your description is whether you are talking about a) java "instances" for which you want to hold extra data on the C side, or b) you simply have a bunch of C data that you would like to keep around - instances perhaps of some C++ class.
If it is the former, then the likely options are
o Pass the data back to the java side, and hold it in java objects.
o Allocate space on the C side, and pass a pointer to the java object to hold it. (If you do this, save the pointer as a "long"; a sketch of this approach follows the list.)
o Allocate the space on the C side, setting up some sort of lookup structure, and holding a key in each java object.
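For the second option, here is a minimal sketch of what the C side might look like, assuming the java class also declares a field 'private long nativePtr = 0;' to hold the pointer. The field name, the StreamData struct, and the FILE-based handle are illustrative only, not a prescribed API:

#include <jni.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Per-instance state kept entirely on the C side; Java only sees an opaque long. */
typedef struct {
    FILE *fp;
    int   mode;
} StreamData;

JNIEXPORT jint JNICALL stream_open(JNIEnv *env, jobject obj, jbyteArray szFilename, jint mode)
{
    jclass   cls = (*env)->GetObjectClass(env, obj);
    jfieldID fid = (*env)->GetFieldID(env, cls, "nativePtr", "J");   /* "J" = long */
    if (fid == NULL)
        return -1;                       /* NoSuchFieldError is pending */

    StreamData *sd = malloc(sizeof(StreamData));
    if (sd == NULL)
        return -1;
    sd->mode = mode;
    sd->fp   = NULL;                     /* ... open the file here and keep whatever you need ... */

    /* Store the pointer in the Java instance as a 64-bit long. */
    (*env)->SetLongField(env, obj, fid, (jlong)(intptr_t)sd);
    return 0;
}

JNIEXPORT jint JNICALL stream_read(JNIEnv *env, jobject obj, jbyteArray buffer, jint offset, jint length)
{
    jclass   cls = (*env)->GetObjectClass(env, obj);
    jfieldID fid = (*env)->GetFieldID(env, cls, "nativePtr", "J");
    if (fid == NULL)
        return -1;

    /* Recover the per-instance struct from the long field. */
    StreamData *sd = (StreamData *)(intptr_t)(*env)->GetLongField(env, obj, fid);
    if (sd == NULL)
        return -1;                       /* open() was never called on this instance */

    /* ... perform the read using sd->fp ... */
    return 0;
}

In real code you would normally look up the jfieldID once (for example on the first call, or in JNI_OnLoad) and cache it in a static variable, since GetFieldID is relatively expensive compared to GetLongField/SetLongField.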
One thing to remember is that you need a way to cleanly dispose of C-side data; see the second sketch below.
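Continuing the same sketch (same hypothetical StreamData struct and nativePtr field), one way to handle disposal is an explicit native close() that frees the C-side state and zeroes the field, so a second close, or a read after close, fails cleanly instead of dereferencing a stale pointer:

/* Hypothetical close(): release the C-side state exactly once. */
JNIEXPORT void JNICALL stream_close(JNIEnv *env, jobject obj)
{
    jclass   cls = (*env)->GetObjectClass(env, obj);
    jfieldID fid = (*env)->GetFieldID(env, cls, "nativePtr", "J");
    if (fid == NULL)
        return;

    StreamData *sd = (StreamData *)(intptr_t)(*env)->GetLongField(env, obj, fid);
    if (sd == NULL)
        return;                          /* never opened, or already closed */

    if (sd->fp != NULL)
        fclose(sd->fp);
    free(sd);

    /* Zero the field so later calls see "closed" rather than a dangling pointer. */
    (*env)->SetLongField(env, obj, fid, (jlong)0);
}

Relying on finalize() alone to trigger this is risky, since finalization may run late or not at all; having the Java side call close() explicitly is the safer contract.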

Similar Messages

  • How the HR related data of BP is stored in SAPCRM Sys??

    Hi Guru's,
    I' m very new to the CRM.
    I want to know how the HR-related data of BP is stored in the SAP CRM system, for a project requirement.
    Please, can anybody help in this regard ??
    Regards,
    Arjun

    hi,
    I don't entirely understand your question.
    But just as in R/3, HR data is stored in infotypes, accessible as always with PPOM; for CRM there is also PPOMA_CRM.
    If you check the evaluation paths in OOAW, you will also see some specific ones for BP. BP is also an entity in HR, just like P, O, CP etc.
    In short, the data is ultimately stored as relations in table HRP1001.
    Kind regards, Rob Dielemans

  • How to generate XML from relational data : PL/SQL or Java

    I'm new to Oracle XML and would appreciate some advice. I've been asked to generate XML documents from data stored in relational tables. The XML documents must be validated against a DTD. We will probably want to store the XML in the database.
    I've seen a PL/SQL based approach as follows:
    1. Mimic the structure of the DTD using SQL object types
    2. Assign the relational data to the object type using PL/SQL as required
    3. Use the SYS_XMLGEN package to render the required XML documents from the SQL objects
    However, creating the object types seems to be quite time consuming (step 1 above) for anything other than the simplest of XML documents.
    I've also seen that there is the Java based approach, namely :
    1. Use the XML generator to build Java classes based on a DTD.
    2. Use these classes to build the required XML
    On the face of it, the Java based approach seems simpler. However, I'm not that familiar with Java.
    Which is the best way to proceed ? Is the PL/SQL based approach worth pursuing or should I bite the bullet and brush up my Java ?
    Is it possible to use a combination of PL/SQL and Java to populate the dtd generated java classes (step 2 of the Java approach) to reduce my learning curve ?
    Thanks in advance

    To help answer your questions:
    1) Now, in 9iR2, you can use SQL/XML as another choice.
    2) You can also use XSU to generate the XML and use XSLT to transform it to a desired format instead of using object views if possible.
    3) The XDK provides class generator support to populate XML data into Java classes.

  • Keynote storing obsolete format data, and possible connection to crashes

    An “Ah ha!” moment with Keynote leading to a question about how it stores information regarding themes and fonts, and whether this is a bug in Keynote and/or if there is something that end users can do as a work-around. These points came to my attention today when I exported from Keynote to PowerPoint to share a (very inferior) version of my presentation with a colleague. Note that the Keynote file I exported from has not exhibited any problems related to error messages or crashes.
    (1) Prior to making the export, I changed all slides to Keynote’s standard out-of-the-box “Gradient” theme since I had used Keynote Theme Park animated “Global Cool” theme (www.keynotethemepark.com). However, even though no slide any longer contained the KTP theme, in doing the export Keynote created a media folder of movies that contained the animated KTP theme. So Keynote is apparently storing obsolete file data and not cleaning up its house after the file is changed.
    (2) Prior to making the export, I changed all slides to use only fonts that I knew to be on my colleague’s PC. However, when I looked under the “Contents” tab of the resulting Powerpoint file’s “Properties” info, I discovered that Powerpoint was listing all the fonts that had been in the original Keynote file but that I had long since changed to Arial and Arial Narrow. Checking through the Powerpoint file slide by slide confirmed that the original fonts do not appear. Again, it appears that Keynote is storing obsolete information.
    MY QUESTION: I am wondering if what I discovered today about Keynote’s “elephant memory” is indicating a bug of some sort, and if so, is there a reasonable work-around for end users that will enable us to avoid having files bloated with obsolete data.
    Also, I am wondering if this storing of obsolete file data could be related to the ”missing file” error reports and Keynote crashes that others have noted in this forum and that I addressed in my previous post. The solution I had discovered was a file level work-around, but the question remained as to what had brought it on.
    BTW, this is with Keynote 3.0.0 because I reinstalled Keynote recently thinking that this procedure might resolve the crashing problem described in my previous post to this forum; the reinstall didn’t, but the problem was solved at a file level in the manner described in my previous post. Now that I finally have time tonight to upgrade again to Keynote 3.02, when I run Software Update expecting to see the Keynote 3.02 upgrade listed, it’s not there. Hmmmmm.… I know I can get it from Apple’s download site but since the site lists two downloads (3.0.1v2 and 3.0.2), I thought it’d be better to let software update manage this process. Now I’m not so sure.

    Thanks, dook and Kyn. Not only does that seem to resolve the issues, but deleting unused master slides dramatically drops the size of the exported presentation (e.g. to 20%).
    I wish the Apple Keynote team would provide a way to more intelligently delete unused themes so as to make it easier to email presentations. It seems to me that there should be a setting available in the export process where one could do this. The only way I could see to do it was one master slide at a time. Please tell me if I am missing something here.
    I also wish we could export just a range of slides, e.g. from # to #, or slides selected in the Navigator/sorter/organizer.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>}
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Using relational data from SQL data source in Planning and Essbase

    Hi,
    How do I take sample data from a SQL data source and bring it into a Hyperion Planning application? I understand that when creating Planning applications, a link between a relational data source and Essbase must be established, because the relational database holds the metadata while the database outline is stored with Essbase. However, all I am currently able to do is load the data into Planning applications via EAS, where I right-click on the Application database, hit load data, and select from either a .txt file or an Excel file. Do I need Oracle Data Integrator? Any help or insight would be greatly appreciated, as well as corrections to any incorrect assumptions I may have made in this post. Thank you.

    When you import your file (Excel or text), you're importing it using a Load Rule in EAS. To load from SQL, you simply create a SQL load rule. You'll load data the exact same way (via EAS), but with a different type of load rule. The load rule will contain the SQL that queries the database. You can preview your data in the load rule the same way you would with a file.
    If your SQL is very complex, I'd recommend creating a view and loading from that view. But otherwise it's pretty straight-forward.
    The only catch is that you need to configure a database connection (to your relational database) on the Essbase server. The Essbase DBA guide will show you how to do this.
    You COULD use ODI, but I tend to only use it for loading metadata.
    Hope this helps,
    - Jake

  • How to create a table which contains relational data and Document data

    Hi all,
    I need to create a table which contains relational data (I mean columns whose data types are NUMBER, VARCHAR) and documents (like XML files/HTML files/images) using iFS.
    When I store the document data (XML/HTML data) in iFS, it is stored as a Document object. So how do I relate this document object to a particular row in a table?
    do guide me
    thanks

    Please see reply at http://technet.oracle.com:89/ubb/Forum36/HTML/000778.html

  • Po tax related data

    hi experts,
    Can you tell me in which tables the tax-related data is stored for a created PO?

    Kiran,
    KONV-KSCHL = Condition type.
    KONV-KBETR = Condition value.

  • Linking relational data to folders in XMLDB Repository by using metadata

    Hi,
    We want to use the XML DB Repository to store documents (PDF, Word, etc) belonging to customers, dossiers of customers, invoices of customers, etc. To accomplish this we are thinking of a folder hierarchy with the first level being customer folders, the second level dossier / invoice folders and within each of these folders the relevant documents / other folders. When querying a customer by sql, we want to determine the correct folder in the repository by storing the primary key of the customer row as user meta data to that folder. After this we get the folders and documents under this folder with the under_path function. Some folders represent a dossier / invoice folder and with this folder the primary key of the dossier / invoice is stored via meta data. While querying these folders by using sql we want to retrieve additional info which is stored in the relational tables: Customer info within a customer table, dossier info within a dossier table, invoice info with an invoice table, etc. Theoretically all available info must be retrieved, so this info preferably must not be stored as metadata (only the primary key and type to these rows and tables).
    My question: Is this the right way to go or are we going to face problems with this architecture? Is there a need to store all info as metadata or can it be done as I describe? So we want to link info from different tables to folders / documents in the repository. Because each folder can have metadata pointing to different tables we are facing (even with a small data set) performance issues. Can someone point me in the right direction?
    Thanks,
    Piotr Chabot Stadhouders
    Timeff
    The Netherlands

    Here's an example, tell me if this is what you need.
    Setup : the following creates a table to store a specific type of metadata, two folders, and finally creates a resource (JPEG image) and its associated metadata :
    SQL> create table character_metadata (
      2    character_id   number(6)
      3  , character_name varchar2(80)
      4  , origin         varchar2(80)
      5  , category       varchar2(30)
      6  );
    Table created.
    SQL> declare
      2    res boolean;
      3  begin
      4    res := dbms_xdb.CreateFolder('/ComicBooks');
      5    res := dbms_xdb.CreateFolder('/ComicBooks/Characters');
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    SQL> commit;
    Commit complete.
    SQL> declare
      2
      3    v_img_name     varchar2(260) := 'odie.jpg';
      4    v_metadata_id  character_metadata.character_id%type;
      5    res            boolean;
      6
      7  begin
      8
      9    /* Create the resource from the image file*/
    10    res := dbms_xdb.CreateResource('/ComicBooks/Characters/' || v_img_name, bfilename('TEST_DIR', v_img_name));
    11
    12    /* Create the metadata in the dedicated table */
    13    insert into character_metadata (character_id, character_name, origin, category)
    14    values(1, 'Odie', 'Garfield', 'Dog')
    15    returning character_id into v_metadata_id;
    16
    17    /* Add the pointer in the resource as user-defined metadata (non schema-based) */
    18    dbms_xdb.appendResourceMetadata(
    19      '/ComicBooks/Characters/' || v_img_name
    20    , xmltype( '<cm:CharacterMetadata xmlns:cm="http://mycompany.com/ComicBooks/Characters"><cm:id>' ||
    21               to_char(v_metadata_id) ||
    22               '</cm:id></cm:CharacterMetadata>' )
    23    );
    24
    25  end;
    26  /
    PL/SQL procedure successfully completed.
    SQL> commit;
    Commit complete.

    A possible query would look like:
    SQL> select cm.*
      2       , x.character_pic
      3  from resource_view v
      4     , xmltable(
      5         xmlnamespaces(
      6           'http://mycompany.com/ComicBooks/Characters' as "cm"
      7         , default 'http://xmlns.oracle.com/xdb/XDBResource.xsd'
      8         )
      9       , '/Resource'
    10         passing v.res
    11         columns metadata_id    number path 'cm:CharacterMetadata/cm:id'
    12               , character_pic  blob   path 'XMLLob'
    13       ) x
    14     , character_metadata cm
    15  where under_path(v.res, '/ComicBooks/Characters') = 1
    16  and cm.character_id = x.metadata_id
    17  ;
    CHARACTER_ID CHARACTER_NAME  ORIGIN          CATEGORY        CHARACTER_PIC
               1 Odie            Garfield        Dog             FFD8FFE000104A4649460001010000
                                                                 0100010000FFDB0084000906061412
                                                                 111414121416141514171717161718
                                                                 1815181D17171617151816151A1718
                                                                 1C261E1719231918141F2F2223272A
                                                                  2C2C2C161E

    The image content is retrieved as a BLOB, along with its additional data.

  • Where to search for a specific Dimention related data

    Hi,
    I guess Hyperion Planning stores the dimension-related data (parent, child, UDA, attributes, consolidation operator, data storage, etc.) in some relational tables of that Planning application. Can anybody help me understand where and how that data is stored, and which table names I should look at for a particular dimension's data?
    Actually, I need to look into the Planning RDBMS tables to get the member names of one particular dimension, and then write a query against another huge Oracle database to search for those members and retrieve the relevant data. I am using Planning ver 9.3.1.
    Please revert back for any clarification.
    Regards.

    Hi,
    Take a look at the tables below in your application repository schema (db); they are all linked through id fields and they include dimensional information.
    HSP_OBJECT
    HSP_OBJECT_TYPE
    HSP_DIMENSION
    HSP_MEMBER
    HSP_ALIAS
    You get the detailed information from HSP_OBJECT, which keeps the details for all the metadata. The other tables will help you understand the relations, positions etc.
    Cheers,
    Alp

  • Is it possible to integrate relational data with OLAP cubes?

    I have a web application that accesses cubes created from AWM via the OLAP API. I need to integrate a column from a relational table in the front-end application and display the column alongside the cube data.
    Is there any way to achieve the functionality from the OLAP API?

    Can you explain how the relational data source relates to the OLAP data, is it a master-detail relationship? If this is the case then you could consider the following:
    1) It depends on how you are displaying the OLAP data. If you are using a non-BI Beans presentation bean and the keys are consistent across both data sources, it should be possible to create two separate queries and glue them together using the common keys within your data source module.
    2) Alternatively, you create a custom text measure within AWM and then use OLAP DML to extract the detail data and load it into a multi-line text variable that could be retrieved via OLAPI. This might not work if there is a large number of rows within the text variable to retrieve as formatting the results within your application might get complicated. The OLAP DML Help contains a lot of excellent examples that will help you create a program that uses SQL commands to load data.
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/

  • Where the scheduled (recurring report) related data saved in OBIEE?

    Suppose I scheduled a report to run daily at 8:30 pm for one month in Oracle BI Publisher.
    I want to know in which location all this recurring-report data is saved in the OBIEE tool.

    Hi,
    It will be stored in the following tables:
    XMLP_SCHED_OUTPUT, XMLP_SCHED_JOB, XMLP_SCHED_SUBSCRIPTION
    DB Instance: Where you configured the details for publisher scheduler (Admin-->Scheduler Configuration)
    thanks,
    saichand.v

  • Hi,master  data tables and SID's

    Hi,
    Where and how can we find the master data tables and SID tables? I go to SE16 and check there, but I am not able to see in which tables these are located or stored.
    Thank you,
    Sekhar..

    Assuming the infoObject name is ZINFOOBJECT.
    The tables are :-
    /BIC/MZINFOOBJECT       View of Master Data  Tables: Characteristic
    /BIC/PZINFOOBJECT       Master Data (Time-Ind.): Characteristic
    /BIC/RZINFOOBJECT        View SIDs and Char.  Values: Characteristic
    /BIC/SZINFOOBJECT       Master Data IDs: InfoObject
    /BI0/HZINFOOBJECT                 Hierarchy: InfoObject
    /BI0/IZINFOOBJECT                 SID Structure of Hierarchies: InfoObject
    /BI0/KZINFOOBJECT        Conversion of Hierarchy Nodes - SID: InfoObject
    /BI0/TZINFOOBJECT        Texts: Char.
    /BI0/XZINFOOBJECT       Attribute SID Table: InfoObject
    /BI0/ZZINFOOBJECT       View Hierarchy SIDs and Nodes: Char.

  • How to create internal table storing instances of ABAP class

    Hi experts, does anyone know how to create an internal table storing instances of an ABAP class, or an alternative way to implement such a function?

    Hi
    Please see the example below from ABAPDOCU; this might help you.
    The internal table cnt_tab is used to store references to the class instances.
    Regards,
    Vishal
    REPORT demo_objects_references.
    CLASS counter DEFINITION.
      PUBLIC SECTION.
        METHODS: set IMPORTING value(set_value) TYPE i,
                 increment,
                 get EXPORTING value(get_value) TYPE i.
      PRIVATE SECTION.
        DATA count TYPE i.
    ENDCLASS.
    CLASS counter IMPLEMENTATION.
      METHOD set.
        count = set_value.
      ENDMETHOD.
      METHOD increment.
        ADD 1 TO count.
      ENDMETHOD.
      METHOD get.
        get_value = count.
      ENDMETHOD.
    ENDCLASS.
    DATA: cnt_1 TYPE REF TO counter,
          cnt_2 TYPE REF TO counter,
          cnt_3 TYPE REF TO counter,
          cnt_tab TYPE TABLE OF REF TO counter.
    DATA number TYPE i.
    START-OF-SELECTION.
      CREATE OBJECT: cnt_1,
                     cnt_2.
      MOVE cnt_2 TO cnt_3.
      CLEAR cnt_2.
      cnt_3 = cnt_1.
      CLEAR cnt_3.
      APPEND cnt_1 TO cnt_tab.
      CREATE OBJECT: cnt_2,
                     cnt_3.
      APPEND: cnt_2 TO cnt_tab,
              cnt_3 TO cnt_tab.
      CALL METHOD cnt_1->set EXPORTING set_value = 1.
      CALL METHOD cnt_2->set EXPORTING set_value = 10.
      CALL METHOD cnt_3->set EXPORTING set_value = 100.
      DO 3 TIMES.
        CALL METHOD: cnt_1->increment,
                     cnt_2->increment,
                     cnt_3->increment.
      ENDDO.
      LOOP AT cnt_tab INTO cnt_1.
        CALL METHOD cnt_1->get IMPORTING get_value = number.
        WRITE / number.
      ENDLOOP.

  • Building XML from relational data

    I need to create XML documents from relational data that conforms to an XML schema document. We need to store and query the XML documents we create.
    Therefore, we have decided to store the XML using the object model rather than as a CLOB
    i.e. register the schema with the XML DB which in turn creates the tables and object types that represent the schema.
    What is the best way to build the XML document from the relational data?
    Can I pass the relational data to the default constructors of the object types created by the schema registration (and then create an XML document from the top-level object instance using SYS_XMLGEN)?
    Or should I build the XML from strings using the XMLELEMENT/XMLFOREST built-ins when retrieving the data from the relational tables, or use the XMLDOM package to build the document?
    Would appreciate any advice on the best approach.

    There are basically two ways to join your document fragments with SQLX:
    1/. With xmlForest
    e.g. SELECT XMLELEMENT ( "Emp",XMLForest(e.employee_id, e.lname, e.salary)) AS "result"
    FROM employees e WHERE employee_id > 1500 ;
    result
    <Emp>
    <employee_id>1769</employee_id>
    <lname>Smith</lname>
    <salary>200000</salary>
    </Emp>
    2/. With nested invocations of xmlElement
    e.g. SELECT XMLELEMENT("Emp", XMLELEMENT("name", e.fname ||' '|| e.lname),
    XMLELEMENT ( "hiredate", e.hire)) AS "result"
    FROM employees e WHERE employee_id > 200 ;
    result
    <Emp>
    <name>John Smith</name>
    <hiredate>2000-05-24</hiredate>
    </Emp>
    <Emp>
    <name>Mary Martin</name>
    <hiredate>1996-02-01</hiredate>
    </Emp>
