Oracle BI Mind Confusing Questions

Hi,
I am new to Oracle BI. I have been reading documents, tutorials and Oracle By Example guides, and watching some videos.
But I still cannot answer some questions that are confusing me.
1-) Why do we have 3 repository layers? In the tutorials, they drag objects from the Physical layer to the Business Model and Mapping layer, where they rename the tables. Then they drag objects from the Business Model to the Presentation layer, where they rename the tables again and delete some columns. It seems to me that we could use just the Physical layer and the Presentation layer: in the Presentation layer we could rename the tables and remove the unwanted columns.
2-) In the Physical layer, we import objects from data sources: we can import tables, views and foreign keys. What is the best practice for designing the business model?
I created a test repository and imported tables, views and foreign keys from the database. But when I try to check consistency (after preparing the Presentation layer), I get error messages about self-joins in the Physical layer. How can I solve the self-join problem?
3-) Should I import only tables and views from the database into the Physical layer? I think that if I do not create joins manually after the import, the Oracle BI Server may not prepare correct SQL statements.
We have a big database (maybe 500 tables), so if I don't import foreign keys, creating the foreign keys manually will be a massive process. I also do not know which foreign keys are mandatory for a well-designed business model.
4-) When database tables change (for example, a new column is added), are these changes automatically propagated to the Physical layer?
Thank you..
Edited by: user4030266 on Mar 24, 2011 1:58 PM

3-) Should I import only tables and views from the database into the Physical layer?
Yes, you need to import the tables; if you want, you can also create views on those tables in the Physical layer.
I think that if I do not create joins manually after the import, the Oracle BI Server may not prepare correct SQL statements.
Yes, you need to define proper joins between the fact and dimension tables in both the Physical and BMM layers.
We have a big database (maybe 500 tables), so if I don't import foreign keys, creating the foreign keys manually will be a massive process. I also do not know which foreign keys are mandatory for a well-designed business model.
You need to do data modeling (the ETL process) first: convert the OLTP schema to OLAP (facts and dimensions), and you will then know which tables and join conditions you need.
In the BMM layer we write the logic: we create our own logical tables with logical columns and hierarchies. We use only the columns we need in this layer and build the logic on top of them.
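To make the layer question concrete: users query the Presentation layer with logical SQL, and the BI Server uses the BMM mappings to turn that into physical SQL. A rough sketch, with hypothetical subject area, table and column names:
-- logical SQL, as issued by an Answers request against the Presentation layer
SELECT "Customers"."Region", "Sales Facts"."Revenue"
FROM "Sales Subject Area"
-- physical SQL the BI Server can generate from the BMM mappings
SELECT d.region_cd, SUM(f.rev_amt)
FROM w_customer_d d
JOIN w_sales_f f ON f.customer_wid = d.row_wid
GROUP BY d.region_cd
The renames, the aggregation rule on Revenue and the join all live in the BMM layer, which is why a Physical-plus-Presentation design alone is not enough.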
Refer : http://www.oraclebidwh.com/2010/10/obiee-bmm-layer-design-principalsbest-practices/
Regards,
Srikanth

Similar Messages

  • Oracle Hot Backup Confusion

    Hi everybody,
    This is regarding a confusion about Oracle hot backup.
    Suppose I have 3 online redo logs and I have put a tablespace in backup mode.
    As a result, whatever changes are made, Oracle will start generating redo for the entire block so as to avoid fractured blocks.
    My question is: what if, while this is going on, all my online redo log files/members get filled up? How does Oracle then manage to keep recording the transactions/changes made to a certain block in the tablespace?
    1.> Does Oracle perform a recovery from the archived redo logs once the tablespace is taken out of backup mode?
    2.> Or is there a reclamation/re-sizing of online redo log file space from other segments?
    Thanks & Regards,
    Prosenjit Mukherjee.

    I am not even sure what #2 means! Anyway!
    Suppose I have 3 online redo logs and I have put a tablespace in backup mode.
    It doesn't matter how many redo log groups you have, and AFAIK this has no relation to a tablespace being in backup mode whatsoever.
    As a result, whatever changes are made, Oracle will start generating redo for the entire block so as to avoid fractured blocks.
    Partially correct! The whole block gets copied only the first time it is changed. For subsequent changes, only the change vectors go into the log buffer and from there to the redo log files. This is exactly equivalent to what happens when the tablespace is not in backup mode.
    My question is: what if all my online redo log files/members get filled up? How does Oracle then manage to keep recording the transactions/changes made to a certain block in the tablespace?
    What does this mean exactly? If you fill up a redo log, a log switch follows, making the LGWR switch from the current redo log group to the next and triggering a checkpoint, which makes the DBWR write the buffers to the datafiles. I am not sure what the confusion is?
    Edit: I apologize, I read this point a little too fast! If all the log files fill up and none can be reused, the database would hang. No more transactions would be allowed, as there is no place to write their change vectors. Sorry, I understood at first that you were talking about standard log switching.
    1.> Does Oracle perform a recovery from the archived redo logs once the datafile is taken out of backup mode?
    What recovery? Before the files go into backup mode, a checkpoint is performed for them, pushing all of their buffers from the cache to the files, and after that the checkpoint information in their headers is frozen, while the files still allow all read/write operations. Once they come out of backup mode, the next time a checkpoint is performed their headers are synched with the rest of the database. Where does recovery come into the picture here?
    2.> Or is there a reclamation/re-sizing of online redo log file space from other segments?
    As I said before, I don't understand at all what this point even means.
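    For reference, a minimal user-managed hot backup sequence looks like this (a sketch, with a hypothetical tablespace name):
    -- freeze the datafile checkpoint information and start full-block redo logging
    ALTER TABLESPACE users BEGIN BACKUP;
    -- copy the datafiles at the OS level while the database stays open, e.g.
    -- host cp /u01/oradata/PROD/users01.dbf /backup/users01.dbf
    ALTER TABLESPACE users END BACKUP;
    -- archive the current log so the copy can be recovered to a consistent point
    ALTER SYSTEM ARCHIVE LOG CURRENT;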
    HTH
    Aman....
    Edited by: Aman.... on Oct 24, 2009 9:19 PM added Edit

  • OraCle 9i Version Confusion

    Friends,
    I downloaded Oracle 9.2.0.1 from the Oracle site. On my PCs I have installed Windows 2000 Professional with SP4 and Windows XP Professional with SP2. My query is: will that version install on my PCs or not?
    Thanks
    Adnan

    Hello,
    Why has this doubt arisen in your mind? There is no question of it not installing on Windows 2000 Professional with SP4 or Windows XP with SP2: it will install on both OSes without any problem. Are you facing a problem with it?
    On Windows 2000 Professional, SP4 must be installed for Oracle 9i to install, and that is already there.

  • Grouping in rtf template like oracle reports - a newbie question

    Hello all
    I am new to BI Publisher and probably have a silly question to ask.
    I have a basic query that returns a flat XML file which looks something like this (it will have multiple rows):
    <ROWSET>
    <ROW num="1">
    <DATE>01-DEC-2007</DATE>
    <PACKAGE>XXX </PACKAGE>
    <DROP_OFF>Hotel1</DROP_OFF>
    <ROOM>1</ROOM>
    <NAME>Test Customer</NAME>
    <PROBLEM_RECORDED>N</PROBLEM_RECORDED>
    <EXCEPTION>1</EXCEPTION>
    </ROW>
    </ROWSET>
    Because I am fairly new to XML, I am at a loss trying to work out how I can build a template that will effectively allow grouping at, say:
    1. Date level
    2. Package level
    3. Drop Off level
    4. put all other data in here
    In Reports I would just define groups and alter the layout accordingly. Obviously, if I had an Oracle Reports version of the SQL that generates the XML, I could just generate the XML from the report and I would get the XML I am looking for.
    But I am working with basic SQL, not Reports, and am wondering what I have to do with my XML to get the grouping I mention above, given that all I have to play with is the example XML I included. I am really bamboozled and think I am missing something simple.
    I don't want to have to write multiple queries with different groupings using CAST/MULTISET, as I thought one of the benefits of BI Publisher was one query, multiple layouts.
    Thanks
    Lisa

    If you have the Word plugin installed, please follow the documentation and try using that:
    load the XML into the Word plugin,
    then select Insert > Table/Form,
    and you can then drag and drop and group by each field.
    http://blogs.oracle.com/xmlpublisher/2006/10/30
    http://blogs.oracle.com/xmlpublisher/2007/10/30
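    If you prefer to type the form fields by hand, nested regrouping in an RTF template looks roughly like this (a sketch against the sample XML above; treat the exact tags as an assumption to verify against your BI Publisher version):
    <?for-each-group:ROW;./DATE?> <?DATE?>
      <?for-each-group:current-group();./PACKAGE?> <?PACKAGE?>
        <?for-each-group:current-group();./DROP_OFF?> <?DROP_OFF?>
          <?for-each:current-group()?> <?ROOM?> <?NAME?> <?EXCEPTION?> <?end for-each?>
        <?end for-each-group?>
      <?end for-each-group?>
    <?end for-each-group?>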

  • Oracle 10g XE // Licensing question

    Hi,
    as far as I can tell so far, XE seems to be limited in only 3 ways:
    - max 4GB user data
    - max 1GB RAM actively used
    - max 1 processor used
    However, this leads to some questions for me:
    1. what is a "processor" defined as? Is this 1 CPU including multiple cores, or is this just 1 core? (Regarding the new Xeons, we have e.g. 2 CPUs with 2 cores each, which with Hyper-Threading makes 4 logical cores on each physical CPU.)
    2. can multiple XE instances be used for distributed DBs?
    e.g. we have, say, 2 servers - 1 at company A (production and research) and 1 at company B (part production needed for comp. A). Can they be connected together so that the main DB is at company A and company B only has a part of the DB from company A? E.g. company A has, say, 100 tables in their DB but B has only 15 tables to access, stored locally for caching? If these distributed scenarios are possible, could you please point me to the docs for it?
    Best Regards,
    Korbinian

    . what is a "processor" defined as I believe a dual core processor is counted as two CPU (that's certainly how Oracle calculate th elicencing for regular 10g). In the case of XE I think they've frigged the database so it will only use one CPU no matter how many CPUs the server actually has.
    2. can multi XE be used for ditributed DB's ?As far as I'm aware you should be able to do single master replication with XE but not multi-master. But then most sensible people would not want to do multi-master replication anyway. You may find this article by Lewis Cunningham informative. The docs on replication are here.
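    For what it's worth, single-master replication is usually done with read-only materialized views; a minimal sketch, with hypothetical table, user and database-link names:
    -- on the master site (company A), to enable fast refresh
    CREATE MATERIALIZED VIEW LOG ON clients;
    -- on the remote site (company B), over a database link to the master
    CREATE DATABASE LINK master_link CONNECT TO repadmin IDENTIFIED BY secret USING 'companyA';
    CREATE MATERIALIZED VIEW clients_mv
      REFRESH FAST NEXT SYSDATE + 1/24  -- refresh hourly
      AS SELECT * FROM clients@master_link;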
    Cheers, APC

  • Oracle OLAP cube build question

    Hello,
    I am trying to build a reasonably large cube (around 100 million rows from the underlying relational fact table). I am using Oracle 10g Release 2. The cube has 7 dimensions, the largest of which is TIME (6 years of data with the lowest level day). The cube build never finishes.
    Apparently it collapses while doing "Auto Solve". I'm assuming this means calculating the aggregations for the upper levels of the hierarchy (although this is not mentioned in any of the documentation I have).
    I have two questions related to this:
    1. Is there a way to keep these aggregations from being performed at cube build time on dimensions with a value-based hierarchy? I already have the one dimension designated as level-based unchecked in the "Summarize To" tab in AW Manager (the TIME dimension).
    2. Are there any other tips that might help me get this cube built?
    Here is the log from the olapsys.xml_load_log table:
    RECORD_ID LOG_DATE AW XML_MESSAGE
    1. 09-MAR-06 SYS.AWXML 08:18:51 Started Build(Refresh) of APSHELL Analytic Workspace.
    2. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Attached AW APSHELL in RW Mode.
    3. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Started Loading Dimensions.
    4. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members.
    5. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    6. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ACCOUNT.DIMENSION. Added: 0. No Longer Present: 0.
    7. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    8. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for CATEGORY.DIMENSION. Added: 0. No Longer Present: 0.
    9. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for DATASRC.DIMENSION (3 out of 9 Dimensions).
    10. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for DATASRC.DIMENSION. Added: 0. No Longer Present: 0.
    11. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ENTITY.DIMENSION (4 out of 9 Dimensions).
    12. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ENTITY.DIMENSION. Added: 0. No Longer Present: 0.
    13. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    14. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INPT_CURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    15. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INTCO.DIMENSION (6 out of 9 Dimensions).
    16. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INTCO.DIMENSION. Added: 0. No Longer Present: 0.
    17. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RATE.DIMENSION (7 out of 9 Dimensions).
    18. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RATE.DIMENSION. Added: 0. No Longer Present: 0.
    19. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    20. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RPTCURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    21. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for TIME.DIMENSION (9 out of 9 Dimensions).
    22. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Members for TIME.DIMENSION. Added: 0. No Longer Present: 0.
    23. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Dimension Members.
    24. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies.
    25. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    26. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Hierarchies for ACCOUNT.DIMENSION. 1 hierarchy(s) ACCOUNT_HIERARCHY Processed.
    27. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    28. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for CATEGORY.DIMENSION. 1 hierarchy(s) CATEGORY_HIERARCHY Processed.
    29. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for DATASRC.DIMENSION (3 out of 9 Dimensions).
    30. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for DATASRC.DIMENSION. 1 hierarchy(s) DATASRC_HIER Processed.
    31. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for ENTITY.DIMENSION (4 out of 9 Dimensions).
    32. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for ENTITY.DIMENSION. 2 hierarchy(s) ENTITY_HIERARCHY1, ENTITY_HIERARCHY2 Processed.
    34. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INPT_CURRENCY.DIMENSION. No hierarchy(s) Processed.
    36. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INTCO.DIMENSION. 1 hierarchy(s) INTCO_HIERARCHY Processed.
    37. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RATE.DIMENSION (7 out of 9 Dimensions).
    38. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RATE.DIMENSION. No hierarchy(s) Processed.
    39. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    40. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RPTCURRENCY.DIMENSION. No hierarchy(s) Processed.
    41. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for TIME.DIMENSION (9 out of 9 Dimensions).
    42. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for TIME.DIMENSION. 2 hierarchy(s) CALENDAR, FISCAL_CALENDAR Processed.
    43. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies.
    44. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes.
    45. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    46. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ACCOUNT.DIMENSION. 6 attribute(s) ACCTYPE, CALC, FORMAT, LONG_DESCRIPTION, RATETYPE, SCALING Processed.
    47. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    48. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for CATEGORY.DIMENSION. 2 attribute(s) CALC, LONG_DESCRIPTION Processed.
    49. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for DATASRC.DIMENSION (3 out of 9 Dimensions).
    50. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for DATASRC.DIMENSION. 3 attribute(s) CURRENCY, INTCO, LONG_DESCRIPTION Processed.
    51. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ENTITY.DIMENSION (4 out of 9 Dimensions).
    52. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ENTITY.DIMENSION. 3 attribute(s) CALC, CURRENCY, LONG_DESCRIPTION Processed.
    53. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    54. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INPT_CURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    55. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INTCO.DIMENSION (6 out of 9 Dimensions).
    56. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INTCO.DIMENSION. 2 attribute(s) ENTITY, LONG_DESCRIPTION Processed.
    57. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for RATE.DIMENSION (7 out of 9 Dimensions).
    58. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RATE.DIMENSION. 1 attribute(s) LONG_DESCRIPTION Processed.
    59. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    60. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RPTCURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    61. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for TIME.DIMENSION (9 out of 9 Dimensions).
    62. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes for TIME.DIMENSION. 3 attribute(s) END_DATE, LONG_DESCRIPTION, TIME_SPAN Processed.
    63. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes.
    64. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Dimensions.
    65. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Started Updating Partitions.
    66. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Updating Partitions.
    67. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Loading Measures.
    68. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE.
    69. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Finished Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE. Processed 100000001 Records. Rejected 0 Records.
    70. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Started Auto Solve for Measures: SIGNEDDATA from Cube FINANCE.CUBE.

    Hi, I've taken a few minutes to do a quick analysis. I just saw in your post that this isn't "real data", but some type of sample. Here is what I'm seeing. First off, this is the strangest dataset I've ever seen. With the exception of TIME, DATASOURCE, and RPTCURRENCY, every single other dimension is nearly 100% dense. Quite truthfully, in a cube of this many dimensions, I have never seen data be 100% dense like this (usually with this many dimensions its more around the .01% dense max, usually even lower than that). Is it possible that the way you generated the test data would have caused this to happen?
    If so, I would strongly encourage you to go back to your "real" data and run the same queries and post results. I think that "real" data will produce a much different profile than what we're seeing here.
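    If it helps, the density can be sanity-checked directly against the relational fact table; a rough sketch, with hypothetical table and column names:
    -- rows actually present vs. rows possible if every combination existed
    SELECT COUNT(*) AS fact_rows,
           COUNT(DISTINCT account_id) * COUNT(DISTINCT entity_id) *
           COUNT(DISTINCT intco_id) * COUNT(DISTINCT category_id) *
           COUNT(DISTINCT time_id) AS possible_rows
    FROM finance_fact;
    -- density = fact_rows / possible_rows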
    If you really do want to try and aggregate this dataset, I'd do the following:
    1. Drop any dimension that doesn't add analytic value
    Report currency is an obvious choice for this - if every record has exactly the same value, then it adds no additional information (but increases the size of the data)
    Also, data source falls into the same category. However, I'd add one more question / comment with data source - even if all 3 values DID show up in the data, does knowing the data source provide any analytical capabilities? I.e. would a business person make a different decision based on whether the data is coming from system A vs. system B vs. system C?
    2. Make sure all remaining dimensions except TIME are DENSE, not sparse. I'd probably define the cube with this order:
    Account...........dense
    Entity..............dense
    IntCo...............dense
    Category.........dense
    Time...............sparse
    3. Since time is level based (and sparse), I'd set it to only aggregate at the day and month levels (i.e. let quarter and year be calculated on the fly)
    4. Are there really no "levels" in the dimensions like Entity? Usually companies define those with very rigid hierarchies (assuming this means legal entity)
    Good luck with loading this cube. Please let us know how "real" this data is. I suspect with that many dimensions that the "real" data will be VERY sparse, not dense like this sample is, in which case some of the sparsity handling functionality would make a huge benefit for you. As is, with the data being nearly 100% dense, turning on sparsity for any dimension other than TIME probably kills your performance.
    Let us know what you think!
    Thanks,
    Scott

  • Oracle Pl/Sql table question..

    Hi guys,
    I have an Oracle PL/SQL question. I don't know which board I should ask it on, which is why I am asking here.
    I declare a PL/SQL table (a traditional array) and populate it in one session. Another user logged on in another session and wanted to access this PL/SQL table's data, but he/she received "No Data Found".
    Is the PL/SQL table session-scoped?
    The PL/SQL table is declared in the package spec, and the procedures that populate and access the data in/from this PL/SQL table are coded in the package body.
    Thanks...Ali

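    Package-level variables (including PL/SQL tables) are indeed session-scoped: each session gets its own private copy of the package state, so another session reading the same index raises NO_DATA_FOUND. A minimal sketch that reproduces this, with hypothetical names:
    CREATE OR REPLACE PACKAGE name_cache AS
      TYPE name_tab IS TABLE OF VARCHAR2(100) INDEX BY BINARY_INTEGER;
      PROCEDURE put(p_idx IN BINARY_INTEGER, p_name IN VARCHAR2);
      FUNCTION get(p_idx IN BINARY_INTEGER) RETURN VARCHAR2;
    END name_cache;
    /
    CREATE OR REPLACE PACKAGE BODY name_cache AS
      g_names name_tab; -- package state: one private copy per session
      PROCEDURE put(p_idx IN BINARY_INTEGER, p_name IN VARCHAR2) IS
      BEGIN
        g_names(p_idx) := p_name;
      END;
      FUNCTION get(p_idx IN BINARY_INTEGER) RETURN VARCHAR2 IS
      BEGIN
        RETURN g_names(p_idx); -- NO_DATA_FOUND if this session never called put
      END;
    END name_cache;
    /
    Session 1 can call name_cache.put(1, 'Ali') and read it back, but session 2 calling name_cache.get(1) raises NO_DATA_FOUND. To share the data across sessions, store it in a real table instead.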

  • LDAP syncronization with Oracle DB. Related questions.

    Hello everybody,
    The problem / objective: I have an Oracle DB with information about clients (for example). I want to store that information in an LDAP server and have it synchronized with the Oracle DB.
    I already read A LOT about this, but no luck. I have tried ApacheDS and Microsoft AD as the LDAP servers. I was able to install both of them, and I was able to create a trigger in my Oracle DB in order to add/delete/update records on the LDAP server. But I need the synchronization to work both ways, so I would need a trigger in ApacheDS or AD, and here is the problem: this is not supported.
    While reading about this, I found information about Oracle Internet Directory. So here are my first questions:
    1- Is OID an LDAP server? I believe it is, but is it like ApacheDS or AD? (In other words, is it a service I can connect to and run CRUD operations against?)
    2- Is OID only supported/deployed with Oracle 11g?
    3- Can I synchronize my Oracle DB with OID? I mean, if a change is made in the Oracle DB, it is applied to OID, and the other way around (like triggers, in both directions)?
    4- If OID is like AD, is it the best LDAP server to use?
    And talking about Microsoft Active Directory (AD): how can I achieve this? I read about third-party tools for it (Quest Quick Connect, Microsoft Forefront Identity Manager), but I want to find a solution more like triggers. (If that is possible; if not, then it is not a solution :) ).
    The questions may not be very clear (the problem is clear), but my head is a mess right now and I would like some advice about this.
    Thanks in advance.
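    (For reference, the DB-to-LDAP direction described above is typically a trigger calling the DBMS_LDAP package; a rough sketch of the delete case only, with hypothetical host, credentials and DN layout; verify the package calls against your database version:)
    CREATE OR REPLACE TRIGGER clients_to_ldap
    AFTER DELETE ON clients
    FOR EACH ROW
    DECLARE
      l_session DBMS_LDAP.session;
      l_rc      PLS_INTEGER;
    BEGIN
      -- connect and bind to the directory (hypothetical host and credentials)
      l_session := DBMS_LDAP.init('ldap.example.com', 389);
      l_rc := DBMS_LDAP.simple_bind_s(l_session, 'cn=orcladmin', 'secret');
      -- remove the directory entry that mirrors the deleted row
      l_rc := DBMS_LDAP.delete_s(l_session,
                'cn=' || :OLD.client_name || ',ou=clients,dc=example,dc=com');
      l_rc := DBMS_LDAP.unbind_s(l_session);
    END;
    /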

    Hi,
    Firstly, I would suggest you upgrade your database from Oracle Release 11.2.0.1.0 to Oracle Release 11.2.0.2, the recommended Oracle 11g database version for SAP solutions. Many of your problems will be resolved by it.
    Question 1:
    So my first question would be: are there any other suggestions besides adjusting the parameter mentioned above, in order to ensure that no work processes go into a hung state due to RFCs occupying them, as this issue always happens at the end of the month, when there are massive numbers of users accessing the system?
    For immediate resolution the approach you have followed is correct, viz. limiting the number of dialog processes for RFC. Secondly, you need to analyze why the RFC processing takes so much time: check which programs are being executed by those RFCs.
    Generate an EarlyWatch report for a more detailed view.
    Question 2:
    My second question is what went wrong with the libttsh11.so file. How could it be 0 size in PRD when there were no signs of changes to the PRD system? Is this a proven Oracle bug or something else, since I have never encountered anything like this before?
    The libttsh11.so library cannot be found in the related directory.
    Cause
    The file system is mounted using the CIO option, but per Note 257338.1, Direct I/O (DIO) and Concurrent I/O (CIO) on AIX 5L, an ORACLE_HOME on a filesystem mounted with the "cio" option is not supported.
    Such a configuration will cause installation, relinking and other unexpected problems.
    Solution
    Disable the CIO option on the filesystem.
    References
    NOTE:257338.1 - Direct I/O (DIO) and Concurrent I/O (CIO) on AIX 5L
    Hope this helps.
    Regards,
    Deepak Kori

  • Oracle VM: some generell questions

    Hello,
    I've gone through some documents from Oracle concerning Oracle VM, but I really didn't find answers to all of my questions. Therefore I will post those questions here, and I would appreciate it if somebody would answer them:
    1.) Ability to switch:
    currently we are using HP servers. Each server hosts several VMs. If one physical server becomes unavailable, the VMs can be switched to another physical server.
    Does Oracle VM have the same ability?
    2.) Reliability/Stability:
    how reliable and stable is Oracle VM?
    3.) Just a hype for the moment?
    is Oracle VM a long-term solution, and will it be supported and developed in the coming years? If we use Oracle VM, we want to be sure we are not using a product that is only current "right now".
    4.) Documentation:
    I've looked up the link http://www.oracle.com/technologies/linux/index.html - are there some other links with some further documentation?
    Rgds
    JH
    Edited by: VivaLaVida on Aug 26, 2010 12:56 PM

    VivaLaVida wrote:
    currently we are using HP servers. Each server hosts several VMs. If one physical server becomes unavailable, the VMs can be switched to another physical server. Does Oracle VM have the same ability?
    Yes.
    2.) Reliability/Stability: how reliable and stable is Oracle VM?
    Very.
    3.) Just a hype for the moment? is Oracle VM a long-term solution, and will it be supported and developed in the coming years?
    It's one of our core products. As you may have noticed, we have even rebranded VirtualBox as Oracle VM VirtualBox, and Solaris LDoms are now Oracle VM for SPARC.
    4.) Documentation:
    I've looked up the link http://www.oracle.com/technologies/linux/index.html - are there some other links with some further documentation?
    The full Oracle VM for x86 documentation library is here: http://download.oracle.com/docs/cd/E15458_01/index.htm

  • Oracle Patch 19 11G question

    Thank you for taking my question! I am applying PSU patch 19 for 11.1.0.7 on Windows 2008.
    I applied the patch successfully, but the very last section (3.3.8), shown below, has me puzzled.
    I thought any applied patches would automatically be applied to any new database instances in my patched Oracle home. Meaning, I could apply the patch and catcpu.sql to a TEST instance, delete the TEST instance, create a new instance called TEST2, and not have to run the catcpu.sql patch file against TEST2.
    Is this CPU patch unique, or is this normal?
    Thanks very much for your comments!
    Kathie
    3.3.8 Post Installation Instructions for Databases Created or Upgraded after Installation of Bundle Patch19 in the Oracle Home
    These instructions are for both RAC environments and non-RAC environments when a database is created or upgraded after the installation of Bundle Patch19.
    You must execute the steps in Section 3.3.7.1, "Loading Modified .sql Files into the Database" and Section 3.3.7.2, "Recompiling Views in the Database" for any new database that was created by any of the following methods:
    •     Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)
    •     Using a script that was created by DBCA that creates a database from a sample database
    •     Cloning a database that was created by either of the two preceding methods, and if Section 3.3.7.1, "Loading Modified .sql Files into the Database" was not executed after Bundle Patch19 was applied
    Upgraded databases require that you perform the steps in Section 3.3.7.1, "Loading Modified .sql Files into the Database" and Section 3.3.7.2, "Recompiling Views in the Database" if these steps have not previously been performed; otherwise, no post-installation steps need to be performed
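    (For context, the post-install steps such readmes refer to are typically run per-database from SQL*Plus; a rough sketch, assuming the bundle's standard script names; check the actual readme:)
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> @catcpu.sql              -- run from the patch directory that ships it
    SQL> @?/rdbms/admin/utlrp.sql -- recompile any invalidated objects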

    Thanks for the comments!
    So to clarify: if I apply PSU patches 18 and 19 and then create a new instance, I only need to run catcpu.sql from patch 19 against the new instance?
    Thanks!
    Kathie

  • Oracle VM server RAM question

    I am running two VM servers, both with 16GB of physical memory installed. When I check the memory specs for the servers using "free -m" or "cat /proc/meminfo", they only show a total of 577MB of RAM. I know this cannot be true, because we are currently running two CentOS guests and one Windows Server 2003 guest on one of the servers, all with 2GB of virtual memory. These machines do not seem to have any performance problems, which leads me to suspect that the VM server is not reporting the RAM that is physically installed but rather the total memory that it itself is using. I do not think it would even be possible to run two CentOS machines and one Windows Server 2003 machine with only 577MB of RAM. Can anyone confirm that the 577MB I am seeing in the VM server is just what it is using? Thanks

    I do see a few lines similar to this
    "title Oracle VM Server-ovs (xen-3.4.0 2.6.18-128.2.1.4.37.el5ovs)
    root (hd0,0)
    kernel /xen-32bit.gz dom0_mem=564M
    module /vmlinuz-2.6.18-128.2.1.4.37.el5xen ro root=UUID=ee6eb40e-1c39-473b-b670-7131959cc839
    module /initrd-2.6.18-128.2.1.4.37.el5xen.img"
    You have answered my question. I just wanted to make sure that it was normal to not see the full physical memory installed. Thanks
    Edited by: Paul_RealityTech on Oct 24, 2011 12:05 PM
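    (The dom0_mem=564M kernel parameter in that grub entry is what caps the memory the management domain reports via "free -m"; the guests draw on the remaining physical RAM. If dom0 ever needed more, you would raise that value in grub.conf, e.g. dom0_mem=1024M, and reboot; the exact figure here is only an illustration.)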

  • Oracle 10gR2 RAC - ASM question

    Hi
    I have a question regarding ASM storage. Let's say I have a system here running Oracle 10gR2 RAC and I would like to add a new disk to, or otherwise extend, the current DATA disk group. How do I do that? Will it affect the existing data already stored in it?

    So to add a little more to the discussion. Let's say your storage administrator presents you a LUN and is nice enough to add a partition of say 7G. (/dev/sdo1).
    Now you need to take /dev/sdo1 stamp it and alter your storage group.
    For illustration purposes I shall use rac1 and rac2 as my dual instance RAC and add to the asm group ARCH.
    As root on rac1
    /etc/init.d/oracleasm createdisk ARCH2 /dev/sdo1
    then run
    /etc/init.d/oracleasm listdisks
    to make sure ARCH2 shows up.
    On rac2 you run
    /etc/init.d/oracleasm listdisks
    You don't see ARCH2 so then run
    /etc/init.d/oracleasm scandisks
    then
    /etc/init.d/oracleasm listdisks
    Now you should see ARCH2
    Ok the asm stamps are in sync now.
    Back to rac1
    su - oracle
    set ORACLE_SID to the ASM instance and use sqlplus
    sqlplus / as sysdba
    (On 10gR2 you connect AS SYSDBA; the SYSASM role only appears in 11g.)
    If you query V$ASM_DISK you will see your disk with a header_status of PROVISIONED
    that's good ...
    NOw while still in sqlplus
    Let's bump up the asm_power_limit so rebalancing runs faster
    alter system set asm_power_limit=5 scope=both ;
    If your asm instance are sharing the same spfile you only need do this on one instance; otherwise run the command both on all asm instances.
    Lastly
    ALTER DISKGROUP ARCH ADD DISK 'ORCL:ARCH2' ;
    Now you can query V$ASM_OPERATION and watch ASM do its magic of rebalancing.
    That's it. All done while the DB is up and running.
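    A quick way to watch the rebalance from the ASM instance (a small sketch; these V$ASM_OPERATION columns exist in 10gR2):
    SELECT operation, state, power, sofar, est_work, est_minutes
    FROM v$asm_operation;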
    How does that work for you?
    -JR

  • Oracle 10gR2 Dataguard quick question

    Hi -
    Just a quick question about Oracle 10gR2 Data Guard. I'm in the process of creating a Data Guard standby, which has been running for a few hours and could take a few more because of the database size and the standby's network latency; I'm using OMS to create the DG. I now need to create a new tablespace (A) and also add one more datafile to an existing tablespace (B).
    Question is: how would this new tablespace and the new datafile in the existing tablespace affect the DG standby, which is about to finish in a couple of hours? Will the DG pick up the new changes from the primary? Or will it finish with an error from the mismatch in the number of files? The issue is that I can't wait for the standby creation to be done, as the additional tablespace and datafile are production-critical and should be added right away.
    note: standby_file_management is set to auto.
    Thanks for your response.
    regards.

    Your post is a bit vague, as OMS is an acronym for Oracle Management Server, which is only a service.
    If you had stated that you were using Database Control or RMAN duplicate database, the picture would have been much clearer.
    Database Control uses RMAN duplicate database.
    Recovery is the mandatory implicit last step of this procedure, and it picks up all changes since you started the duplicate.
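    (Concretely, with STANDBY_FILE_MANAGEMENT=AUTO on the standby, datafiles created by DDL on the primary are created on the standby as the redo is applied; a sketch with hypothetical names and paths:)
    -- on the primary, once the standby is applying redo
    CREATE TABLESPACE ts_a DATAFILE '/u01/oradata/PROD/ts_a01.dbf' SIZE 500M;
    ALTER TABLESPACE ts_b ADD DATAFILE '/u01/oradata/PROD/ts_b02.dbf' SIZE 500M;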
    One word of warning: Network latency is one thing to avoid like hell in a standby configuration.
    It might even slow down your production database.
    Sybrand Bakker
    Senior Oracle DBA
