Processing Oracle relational data with Hadoop

Hello everybody,
I would like to know if it is possible to process Oracle relational data with Hadoop in order to get better performance.
Native parallel processing in Oracle vs. Hadoop processing: which one is better?
How can we do it?
Thanks in advance.

Hello,
This is the Oracle NoSQL Database forum. You asked about Oracle Database, so you may want to try that forum instead of this one. I suggest that you ask about "In-Database Map/Reduce".
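For reference, here is a minimal sketch of the pipelined table function pattern that "In-Database Map/Reduce" usually refers to; the emp table and the per-department count are illustrative assumptions, not something from this thread. The "map" work runs in parallel query slaves, and a final GROUP BY does the "reduce":
-- illustrative only: count rows per deptno in parallel
create or replace type dept_count_t as object (deptno number, cnt number);
/
create or replace type dept_count_tab as table of dept_count_t;
/
create or replace function map_count(p_cur sys_refcursor)
  return dept_count_tab pipelined
  parallel_enable (partition p_cur by any)
is
  v_deptno number;
  type cnt_map is table of number index by pls_integer;
  v_counts cnt_map;
  i pls_integer;
begin
  loop                                  -- "map": each slave consumes a slice of the rows
    fetch p_cur into v_deptno;
    exit when p_cur%notfound;
    if v_counts.exists(v_deptno) then
      v_counts(v_deptno) := v_counts(v_deptno) + 1;
    else
      v_counts(v_deptno) := 1;
    end if;
  end loop;
  i := v_counts.first;
  while i is not null loop              -- emit this slave's partial counts
    pipe row (dept_count_t(i, v_counts(i)));
    i := v_counts.next(i);
  end loop;
  return;
end;
/
-- "reduce": merge the partial counts from all slaves
select deptno, sum(cnt) as cnt
from   table(map_count(cursor(select deptno from emp)))
group  by deptno;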
Charles

Similar Messages

  • Need Suggestion to Stage Process Order related data

    Hi All,
    Could anybody help me with a solution or suggestion for the problem I am describing here? We need to stage process-order-related data (downloaded from SAP ECC for sending to the machine database, or confirmed by the machine database to be sent back to SAP ECC). Can we use the NetWeaver database, either by creating separate tables or by creating a separate schema in the NetWeaver DB, or should we go for a separate DB to stage all transactional data for process orders?
    Thanks in Advance.
    Chandan

    Hello Chandan,
    1) Yes, you can use the underlying NetWeaver DB, either by creating new tables or by creating a new schema, but per MII best practices it is not recommended, because you might accidentally end up affecting the NW and MII configurations
    2) There is no hard and fast rule saying you must stage data, but it is very good practice because,
                           a) Your data will be buffered when SAP is down
                           b) Faster processing
                           c) Ability to perform more analysis on the data through drill-downs to sub-levels
    3) I would recommend going for a separate DB to stage all your SAP data (both for sending data to and receiving data from ECC)
    Hope this helps!!
    Regards,
    Adarsh

  • Is it possible to integrate relational data with OLAP cubes?

    I have a web application that accesses cubes created with AWM via the OLAP API. I need to integrate a column from a relational table into the front-end application and display that column alongside the cube data.
    Is there any way to achieve this functionality through the OLAP API?

    Can you explain how the relational data source relates to the OLAP data: is it a master-detail relationship? If so, you could consider the following:
    1) It depends on how you are displaying the OLAP data. If you are using a non-BI Beans presentation bean, and the keys are consistent across both data sources, it should be possible to create two separate queries and glue them together using the common keys within your data source module (see the sketch after this list).
    2) Alternatively, you could create a custom text measure within AWM and then use OLAP DML to extract the detail data and load it into a multi-line text variable that can be retrieved via the OLAP API. This might not work well if there is a large number of rows to retrieve from the text variable, as formatting the results within your application might get complicated. The OLAP DML Help contains a lot of excellent examples that will help you create a program that uses SQL commands to load data.
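    A minimal sketch of the glue-on-common-keys idea from option 1, using hypothetical names (sales_cube_view as a relational view over the cube, product_notes as the relational detail table):
    -- hypothetical names; the join key must be consistent across both sources
    select c.product_id,
           c.sales_amount,       -- cube data
           n.note_text           -- relational column shown alongside it
    from   sales_cube_view c
    join   product_notes   n on n.product_id = c.product_id;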
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/

  • Interfacing Oracle spatial data with ArcView 9

    Hello,
    I'm trying to take my spatial data in Oracle, which has SDO_GEOMETRY fields, and have it display in ArcView. I know I have to do something with SDO_GEOMETRY, but I'm not sure what. I've been reading that 3rd-party tools can be used. Is this the only way, or is there something else I can do?
    Thanks much,
    Nora

    The way I do it is through ArcSDE: register the Oracle table with SDE, and then you can view it in any Arc software.
    There is another way, through Direct Connect. I haven't used it, but you can find some help on the ESRI website.
    V

  • Error occurring while processing data with DIAdem 8.1: Exception EAccessViolation in module ntdll.dll at 000111DE. Access violation at address 7C9111DE in module 'ntdll.dll'. Read address 37363430

    Hello,
    We are having an issue running DIAdem 8.1 on a new HP XW9400 with Windows XP SP2. Three errors have been occurring, with frequent crashes:
    1) "Exception EAccessViolation in module ntdll.dll at 000111DE. Access violation at address 7C9111DE in module 'ntdll.dll'. Read address 37363430."
    2) The instruction at "0x7c9111de" referenced memory at "0x352e302d". The memory could not be "read".
    3) ---Error---   DIAdem
    Error in Autosequence - processing in line: 74 (IARV_VAR_GET)
    Runtime Error while executing command "Iarv2Txt$ := FR(T9,L1)"
    Error type: ACCESS VIOLATION
    Error address: 000101DE
    Module name: ntdll.dll
    We are using the same scripts and version 8.1 on a variety of Dell desktop computers (W2K and XP SP2) without any issue. This affects no other software on the HP XW9400 besides DIAdem, so we are looking for suggestions. Appreciate any help.

    Christian,
    I will answer your questions in the text below. Thanks for your help.
    Hi swillh,
    I also would like to help you.
    Unfortunately, the reported access violation in the central Windows ntdll.dll is very unspecific.
    Maybe the following questions will help you provide me with more info.
    1. You mentioned that the auts and - I think - also the accessed text file reside on a server.
    Is there a stable network connection?
    The server can sometimes be a little slow but the connection is good. This computer is using the same connection that the prior computer utilized without issue.
    Are the files accessed by multiple clients simultaneously?
    It is possible that more than one computer is accessing the same file, but again, this has never been an issue. We are only reading the files, not writing to them.
    The processing routines we are running have been used for over 5 years without any issues until adding this computer.
    Is the text file read by one client while another client is writing the same file?
    No, files are "read only"
    2. Have you already tested opening the file with the FileOpen command before calling FR?
    Yes
    Do you see any chance to convert the aut to a vbs file? That would give you more alternatives for accessing text files.
    3. What do you mean by "processing ATDs with 30 or more channels of data"? Where is the relation between ATD files and data channels?
    Processing crash-dummy (ATD) data with file sets low in channel count (15 channels) results in successful processing without any crashes or access errors. When processing dummies with more than 20 channels we sometimes encounter these issues. The higher channel count is the only common factor I can find in these faults.
    Steve

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for anything more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record, using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records when passing a batch (terrible), or 160 seconds for one record (unacceptable!). How can it take so much longer to process a fraction of the records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns, plus an XMLTYPE metadata column. We thought this would make it fast to pull a single record (or a few records for a given employee). We knew this might be unnecessary given XML indexes and virtual columns, but we were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs (see the sketch after this list). We tried this outside PL/SQL, both with XMLFOREST and with XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable, but it interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPath constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., the number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index is needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPath constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All of those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations, not query plan variations.
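    A minimal sketch of the explicit-cursor workaround described above, assuming hypothetical emp_master/emp_detail source tables (the real feed joins about 10 tables):
    -- hypothetical tables; build one XML record per employee instead of
    -- converting the whole set in one query (which hangs above 800 records)
    declare
      v_doc xmltype;
    begin
      for e in (select distinct emp_id from emp_master) loop
        select xmlelement("Root",
                 xmlforest(m.emp_id as "Id", m.name as "Name"),
                 (select xmlagg(xmlelement("Element",
                           xmlforest(d.yr as "Year", d.salary as "Salary")))
                  from emp_detail d
                  where d.emp_id = m.emp_id))
        into v_doc
        from emp_master m
        where m.emp_id = e.emp_id;
        insert into records (ssn, xmlrec) values (e.emp_id, v_doc);
      end loop;
      commit;
    end;
    /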
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
      xmlrec xmltype;
    BEGIN
      xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
      <Id>123456789</Id>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
      <Element>
        <Subelement1><Code>11</Code></Subelement1>
        <Subelement2><Code>21</Code></Subelement2>
        <Subelement3><Code>31</Code></Subelement3>
      </Element>
    </Root>');
      for i IN 1..100000 loop
        insert into records(ssn, xmlrec) values (i, xmlrec);
      end loop;
      commit;
    END;
    /
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
      description varchar2(100);
    BEGIN
      description := 'This is the code description ';
      for i IN 1..3000 loop
        insert into codes(code, description) values (to_char(i), description);
      end loop;
      commit;
    END;
    /
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>}
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
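    In case it helps others, a sketch of one way to move the lookups into plain SQL, using the RECORDS/CODES tables above: shred the repeating elements with XMLTABLE and join CODES relationally, so the optimizer can hash-join the code tables instead of doing one nested-loop lookup per code. Reassembling the annotated XML is left to the application.
    select x.code1, c1.description desc1,
           x.code2, c2.description desc2,
           x.code3, c3.description desc3
    from   records r,
           xmltable('/Root/Element' passing r.xmlrec
                    columns code1 varchar2(4) path 'Subelement1/Code',
                            code2 varchar2(4) path 'Subelement2/Code',
                            code3 varchar2(4) path 'Subelement3/Code') x,
           codes c1, codes c2, codes c3
    where  r.ssn   = '10000'
      and  c1.code = x.code1
      and  c2.code = x.code2
      and  c3.code = x.code3;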

  • PA_CONTRACT_XSLFO: How to invoke an RTF template with a related data template

    Dear Reader,
    Actually, I want to extend the standard Document Type Layout for a Purchase Agreement Contract with additional data from the approved supplier list (ASL).
    Therefore I have created an RTF template and a data template with the needed SQL statement. For testing I put these in a standalone concurrent program and it works fine (the result was a blue table with all data rows).
    The next step was to invoke the RTF template from the PA_CONTRACT_XSLFO template to extend the Document Type Layout for my Purchase Agreement Contract. So I put the needed invoke statements
    <xsl:import href="xdo://XXOC.XX_RTF_TEMPLATE.de.00/"/>
    and
    <xsl:call-template name="XX_RTF_TEMPLATE"/>
    into the XSLFO template. I also extended the RTF template with the define-template statement
    <?template:XX_RTF_TEMPLATE?>
    So far, all seems to be fine.
    But as a result I get the standard document for the Purchase Agreement Contract with the additional blue table from the RTF template, BUT WITHOUT DATA!
    From my point of view, the SQL statement in the data template is not being executed, but I don't know why.
    Does Oracle support a combination of an XSLFO template with a data template?
    [XSLFO-template] with related [XSD-data definition]
    calls [RTF-template] with related [data template (with included sql-statement)]
    Thanks for your help.
    Best regards
    Mario.

    To call an RTF template from another RTF template by passing a value, try creating in the main template a hyperlink URL with parameters for the other template:
    http://bipconsulting.blogspot.ru/2010/02/drill-down-to-detail-or-another-report.html
    As for the requirement that, when a user pulls a quote report from Siebel, the new RTF template should be attached to the quote at the end: IMHO you cannot attach it to the main report; it will be a second, independent report.
    You could try a subtemplate, but that is not about calling an RTF from an RTF by click; it is about automatically calling an RTF subtemplate from the main RTF based on some conditions.
    For example, the main template contains some data, and if some condition is true, the subtemplate is called and its output is placed where the condition appears.

  • What is the role of the LNS process in Oracle 10g Data Guard

    Hi,
    Please help me understand the actual working of the LNS process in Oracle 10g Data Guard.
    When I use SYNC redo transport, the output of v$managed_standby looks like this:
    PROCESS  PID    STATUS   CLIENT_PROCESS  GR#  SEQ#
    ARCH     9258   CLOSING  ARCH            2    498
    ARCH     9260   CLOSING  ARCH            1    499
    ARCH     9262   CLOSING  ARCH            2    496
    ARCH     9264   CLOSING  ARCH            1    497
    LGWR     9206   CLOSING  LGWR            2    482
    It does not display any info about LNS. Does that mean LNS is not used in SYNC redo transport mode?
    But if I change it to ASYNC, then the output of v$managed_standby looks like this:
    PROCESS  PID    STATUS   CLIENT_PROCESS  GR#  SEQ#
    ARCH     9258   CLOSING  ARCH            1    509
    ARCH     9260   CLOSING  ARCH            2    510
    ARCH     9262   CLOSING  ARCH            1    505
    ARCH     9264   CLOSING  ARCH            2    508
    LGWR     9206   CLOSING  LGWR            1    503
    LNS      10528  CLOSING  LNS             2    510
    Now it displays the info about the LNS process.
    I read in the Oracle documentation that the LNS process sends redo data from the primary (through a network service) to RFS on the standby side.
    But the first output suggests that LNS is not running; if not, which process sends redo from the primary to RFS on the standby?
    I also read in a blog that LGWR uses some extra buffer space in the primary DB SGA to write redo, and LNS reads redo from that buffer and sends it to RFS on the standby side.
    I am totally confused. Can you please help me with the correct logic behind this?
    Thanks in advance.

    Hello,
    On the primary database, when you query v$managed_standby, it shows the LNS process, since this process sends redo information to the standby database; on the standby database, the RFS process receives the redo information.
    So when you query v$managed_standby on the primary it shows LNS, and when you query it on the standby it shows RFS. Please let us know where you are running the query.
    Refer this http://datadisk.co.uk/html_docs/oracle_dg/architecture.htm
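    For reference, the output above comes from v$managed_standby; a quick way to compare is to run the same query on both sites:
    -- run on both primary and standby; LNS should appear on the primary
    -- (it ships redo), RFS on the standby (it receives redo)
    select process, pid, status, client_process, group#, sequence#
    from   v$managed_standby
    order  by process;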

  • Help Required: Working with relational data

    Hi,
    I'm looking for some advice.
    Scenario
    We have a customer who has an SE relational database with APEX for transaction processing. They require reporting functionality and are debating whether to buy BI SE One or BI EE; they will not buy the Enterprise Edition database, though.
    Questions
    Here are my questions:
    1. In order to use Answers and Dashboards, does the relational data have to be in a dimensional format? Is this recommended?
    2. We were debating whether to create data marts for various business requirements. Is this standard procedure for relational data (which is not too complex)?
    3. If the data has to be in a dimensional format, is it best modelled using Warehouse Builder or in the model layer of the repository using BI Administrator? (Bearing in mind that the customer is an SME using an SE database.)
    The main problem we're having, being fairly new to warehouse building, is that we don't know the best path to take when working with relational data. Do we create dimensional models? What should we use? Is a relational structure suitable enough?
    Can anyone shed some light on our problems?
    Regards
    Kevin

    1. In order to use Answers and Dashboards you do not need a data warehouse. BI can report from both a transactional database and a data warehouse.
    2. It depends on what your business requirements are. BI EE/SE1 does not require you to have data marts. Data marts would give performance gains and would help you visualize the transactional data from a business standpoint; they are a big plus but not a mandate. If you want to go the route of creating data marts, you can easily do that with BI SE1, since it includes Warehouse Builder, which can be used both as a data modelling tool and as an ETL tool.
    3. Warehouse Builder is an ETL tool. It helps you visualize and deploy data warehouses in databases. The BI EE Admin tool, by contrast, is a logical dimensional modelling tool: the facts and dimensions need not exist physically, but the tool uses them to do the querying/processing.
    Since you are just starting, you can use BI EE directly on top of the transactional system for the time being. As a long-term solution you can start working on your data marts, and as soon as they are ready you can port BI EE on top of them rather than the transactional system.
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com

  • SAP Oracle database crash with error ORA-00490: PSP process terminated

    Hi ALL,
    Our Oracle database crashed with this error code in the trace: ORA-00490. I started the database again and it is working fine, but I could not find the reason for the crash or what this error is about.
    Can someone help me?
    Errors in file /oracle/SRD/saptrace/background/srd_pmon_28096.trc:
    ORA-00490: PSP process terminated with error
    Tue Nov 25 09:00:57 2008
    PMON: terminating instance due to error 490
    Instance terminated by PMON, pid = 28096
    Thanks,
    Dinesh

    Hi Stefen,
    Please find the trace file below:
    /oracle/SRD/saptrace/background/srd_pmon_28096.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning and Data Mining options
    ORACLE_HOME = /oracle/SRD/102_64
    System name:    SunOS
    Node name:      nzlsfn23
    Release:        5.10
    Version:        Generic_137111-01
    Machine:        sun4u
    Instance name: SRD
    Redo thread mounted by this instance: 1
    Oracle process number: 2
    Unix process pid: 28096, image: oracle@nzlsfn23 (PMON)
    2008-11-25 09:00:57.497
    SERVICE NAME:(SYS$BACKGROUND) 2008-11-25 09:00:56.210
    SESSION ID:(24.1) 2008-11-25 09:00:56.194
    Background process PSP0 found dead
    Oracle pid = 6
    OS pid (from detached process) = 28098
    OS pid (from process state) = 28098
    dtp = 38000afd8, proc = 497000860
    Dump of memory from 0x000000038000AFD8 to 0x000000038000B020
    38000AFD0                   00000005 00000000          [........]
    38000AFE0 00000004 97000860 00000000 00000000  [.......`........]
    38000AFF0 00000000 50535030 00020000 00000000  [....PSP0........]
    38000B000 00006DC2 00000000 00000000 48E50DA4  [..m.........H...]
    38000B010 00000001 000E3273 00040081 00000000  [......2s........]
    Dump of memory from 0x0000000497000860 to 0x0000000497001048
    497000860 02010000 00000000 00000000 00000000  [................]
    497000870 00000000 00000000 00000000 00000000  [................]
    497000880 00000004 97042570 00000004 97047810  [......%p......x.]
    497000890 00000004 97028E68 00000004 97045BE8  [.......h......[.]
    4970008A0 00000000 00000000 00000004 97045C70  [..............\p]
    4970008B0 00000004 97045C70 00000004 97047800  [......\p......x.]
    4970008C0 01060000 00000000 00000004 97025ED8  [..............^.]
    4970008D0 00000004 97028E68 00000006 00000000  [.......h........]
    4970008E0 00000000 00000000 00000000 00000000  [................]
    4970008F0 00000000 00000000 00000004 97042490  [..............$.]
    497000900 00000004 970425A0 00000000 00000000  [......%.........]
    497000910 00000000 00000000 00000000 00000000  [................]
            Repeat 3 times
    497000950 00000003 00000000 00000000 00000000  [................]
            Repeat 1 times
    497000970 00000000 00000000 00000000 00000000  [................]
    497000980 00000004 00000000 00000000 00000000  [................]
    497000990 00000003 00000000 00000000 00000000  [................]
            Repeat 1 times
    4970009B0 00000004 00000000 00000000 00000000  [................]
    4970009C0 00000005 00000000 00000000 00000000  [................]
    4970009D0 00000003 00000000 00000000 00000000  [................]
    4970009E0 00000000 00000000 00000000 00000000  [................]
            Repeat 8 times
    497000A70 00000000 00000000 00000004 97000A78  [...............x]
    497000A80 00000004 97000A78 00000000 00000000  [.......x........]
    497000A90 00000000 00000000 00000004 97000A98  [................]
    497000AA0 00000004 97000A98 00000000 00000000  [................]
    497000AB0 00000000 00000000 00000000 00000000  [................]
            Repeat 2 times
    497000AE0 00000000 00000000 00000018 00000030  [...............0]
    497000AF0 00000001 00000B3D 00000004 970037D0  [.......=......7.]
    497000B00 00000004 580096B0 00000001 00000000  [....X...........]
    497000B10 00000000 00000000 00000000 00000000  [................]
            Repeat 2 times
    497000B40 00006DC2 00000000 00000000 00000000  [..m.............]
    497000B50 00000000 00000000 00000000 00000000  [................]
            Repeat 2 times
    497000B80 00000004 97000860 00000000 00000000  [.......`........]
    497000B90 00000000 00000000 00000000 00000000  [................]
            Repeat 7 times
    497000C10 00000004 97000C10 00000004 97000C10  [................]
    497000C20 00000000 00000000 00010000 00000000  [................]
    497000C30 00000000 00000117 0000000A 00000000  [................]
    497000C40 00006DC2 00000000 00000000 48E50DA4  [..m.........H...]
    497000C50 00000001 00000000 00000000 00000000  [................]
    497000C60 00000000 00000000 00000000 00000000  [................]
            Repeat 2 times
    497000C90 00000000 00000000 00000003 FFFFFFFF  [................]
    497000CA0 00000000 00000000 00000000 00000000  [................]
            Repeat 13 times
    497000D80 73726461 646D0000 00000000 00000000  [srdadm..........]
    497000D90 00000000 00000000 00000000 00000000  [................]
    497000DA0 00000000 00000006 6E7A6C73 666E3233  [........nzlsfn23]
    497000DB0 00000000 00000000 00000000 00000000  [................]
            Repeat 2 times
    497000DE0 00000000 00000000 00000000 00000008  [................]
    497000DF0 554E4B4E 4F574E00 00000000 00000000  [UNKNOWN.........]
    497000E00 00000000 00000000 00000000 00000000  [................]
    497000E10 00000000 00000008 32383039 38000000  [........28098...]
    497000E20 00000000 00000000 00000000 00000000  [................]
    497000E30 00000000 00000005 6F726163 6C65406E  [........oracle@n]
    497000E40 7A6C7366 6E323320 28505350 30290000  [zlsfn23 (PSP0)..]
    497000E50 00000000 00000000 00000000 00000000  [................]
    497000E60 00000000 00000000 00000000 00000016  [................]
    497000E70 00000000 00000002 00000000 00000000  [................]
    497000E80 00000000 00000000 00000000 00000000  [................]
            Repeat 8 times
    497000F10 00000000 00000000 00000000 00020000  [................]
    497000F20 00000000 00000000 00000000 00000000  [................]
    497000F30 00000000 00000000 00000003 9E1F6748  [..............gH]
    497000F40 00000004 97001728 00000004 97000758  [.......(.......X]
    497000F50 00000000 00000000 00000003 9E26B5B0  [.............&..]
    497000F60 00000000 00000000 00000000 00000000  [................]
            Repeat 1 times
    497000F80 00000004 97000F80 00000004 97000F80  [................]
    497000F90 00000000 00040000 00000000 00000000  [................]
    497000FA0 00000000 00031A55 00000000 0004D7DD  [.......U........]
    497000FB0 00000000 00071A55 00000000 00000000  [.......U........]
    497000FC0 00000000 00000000 00000000 00000000  [................]
    497000FD0 00000000 00000828 00000000 000000E0  [.......(........]
    497000FE0 00000000 00000828 00000000 00000000  [.......(........]
    497000FF0 00000000 00000000 00000000 00000000  [................]
            Repeat 4 times
    497001040 00000002 00000000                    [........]
    error 490 detected in background process
    ORA-00490: PSP process terminated with error

  • Oracle Secure Enterprise Search integration with Hadoop

    Hello Guys,
    I am currently exploring Oracle SES for performing search across all enterprise information assets. One of my assets is a Hadoop system holding a huge dump of data.
    Does SES have out-of-the-box integration with Hadoop (HDFS), or is there a specific connector available that can be used to make the connection and search the data?
    I am new to Oracle SES; I would appreciate it if someone could answer this question.
    Thanks,
    Sooraj

    It can be done but it's not easy.
    Federation is done through the Web Services API. To create a custom federation to a non-SES endpoint, you have to replicate most of the Web Services API. You can choose to do it selectively, for example you can choose to implement doOracleSearch but not doOracleAdvancedSearch or doOracleBrowseSearch (names from memory - could be wrong) which would give you the basic search capability but no advanced search or browse against the federated source.
    Then you've got the problem of merging results. When all federated sources are SES-based, it's easy to merge them: you know that a document which scores 63 from one source is more relevant than one that scores 62 from another, and so should appear higher in the hit list. But if one of those sources is - say - Bling, then how do you know that a 63 from Bling is comparable to a 63 from SES? They're probably not comparable.
    These are the reasons that mention of federation to external sources was removed from the 10.1.8.4 documentation; it was felt that it just wasn't practical for most users.
    A rather better alternative is to use the Suggested Content feature. This is MUCH easier to configure, and doesn't attempt to merge the results. From a user's point of view this may be less desirable than a merged result set, but given that any such merge probably won't work well, it's really better all round.

  • Connection with Hadoop/big data

    Hi,
    How can we connect Hadoop with Endeca Studio? If there is any document on this, please do let me know.
    thanks and regards
    Shashank Nikam

    Hi,
    As far as I know, you will need to use Oracle Data Integrator (ODI) or the Oracle Big Data Connectors.
    Some sites:
    Oracle Data Integrator Enterprise Edition 12c | Data Integration | Oracle
    http://docs.oracle.com/cd/E37231_01/doc.20/e36963/concepts.htm#BIGUG107
    http://www.oracle.com/us/products/database/big-data-connectors/overview/index.html

  • How to retrieve relational data from an XMLType column in Oracle 10g R2

    Hi
    I want to know how to retrieve the data that is in an XML document stored in an XMLType column of a table (or in an XMLType table holding the XML document). The XML document has to be queried with XQuery as relational data (not as an XML document).
    If anybody has some ideas, please share them.
    Please share an example, because I am new to XQuery.
    Thanks in Expectation,
    Selva.

    Got it working now. I used the 'extract' function in my select statement, but had to add the .getStringVal() function, because the extract function by itself returns an XMLType. The call for the column in the SQL statement looked like this:
    extract(XML_CONTENT, '/ROOTOBJECT').getStringVal() xml_content
    Thanks so much for your help. Problem solved!
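    Since the question asked for the XQuery route, here is also a minimal sketch using XMLTABLE (the table, column, and element names are illustrative): each matching element becomes a relational row.
    -- hypothetical shape: my_table(xml_content xmltype) holding documents like
    -- <ROOTOBJECT><ITEM><ID>1</ID><NAME>abc</NAME></ITEM>...</ROOTOBJECT>
    select t.id, t.name
    from   my_table m,
           xmltable('/ROOTOBJECT/ITEM' passing m.xml_content
                    columns id   number        path 'ID',
                            name varchar2(100) path 'NAME') t;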

  • How does Oracle deal with data loss? -----QNo.104

    In incomplete recovery, some data will be lost. How does Oracle deal with it?
    For example, at 9:00am you find that it was a mistake to drop a user (you dropped it at 8:30am). But other users' transactions are in progress, and a lot of data was entered between 8:30 and 9:00. If you decide to perform an incomplete recovery, does this mean all the data entered between 8:30 and 9:00 will be lost?

    For example, at 9:00am you find that it was a mistake to drop a user (you dropped it at 8:30am) while other transactions are in progress. To be clear: you can't drop a user while a session is connected to that schema.
    I don't think you would lose any data.
    The other workaround for this issue would be:
    I would clone the database on another server to the point just before the user was dropped, take an export of that user, and then import the user into production.
    Does this answer your question?
    Jaffar
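    For completeness, a minimal sketch of what the incomplete (point-in-time) recovery itself would look like in RMAN; everything committed after the target time is indeed lost, which is why the clone-and-export approach above is usually preferable (the timestamp is illustrative):
    RMAN> run {
      startup force mount;
      set until time "to_date('2008-01-15 08:29:00','YYYY-MM-DD HH24:MI:SS')";
      restore database;
      recover database;
      alter database open resetlogs;
    }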

  • Inserting the current date with timestamp in an Oracle database

    Hi Experts,
                     I want to insert the current date and timestamp into a field in an Oracle database table.
    I am able to insert the date, but I am not able to insert the date with the timestamp. Any suggestions?
    Thanks
    Naveen

    Naveen,
    Do you want to get the current date (from SYSDATE) in a specific format, or transform a value containing a date/time so you can insert it into Oracle?
    An Oracle DATE column always stores the time component, so to store the current date and time you can simply insert SYSDATE directly; a format mask only comes into play when you convert between dates and text (see the sketch below).
    If you want to transform a date held as text, use something like this:
    TO_DATE(your_date, your_format)
    but make sure your format is compliant with your date, i.e.
    TO_DATE('31/12/2008','MM/DD/YYYY') would raise an error (literal does not match format string) because Oracle can't recognize 31 as a month.
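    A minimal sketch, assuming a hypothetical table t with a DATE column dt and a TIMESTAMP column ts:
    -- DATE always carries the time of day; SYSTIMESTAMP adds fractional seconds
    insert into t (dt, ts) values (sysdate, systimestamp);
    -- the format mask belongs to the text conversion, not to the stored value
    select to_char(dt, 'dd/mm/yyyy hh24:mi:ss') from t;
    -- parsing text: the mask must match the literal
    insert into t (dt)
    values (to_date('31/12/2008 17:45:00', 'dd/mm/yyyy hh24:mi:ss'));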
    Chris
