EUL To Repository

Does anyone know the status of tool development for converting a Discoverer EUL to OBIEE?
Thanks
Raghu

I am also keen to know the status of this. I went to an Oracle seminar on BI-EE in February 2008, where there was a lot of discussion about Oracle testing a migration path to convert an existing EUL into a repository for use in BI-SEOne or BI-EE, but it has all gone quiet since. Does anyone out there know the status of this? I have a number of clients who are keen to move to the new toolset but have a lot invested in existing EULs and workbooks.

Similar Messages

  • Discoverer with OWB generated EULs

    I configured my warehouse star schema using OWB (Warehouse Builder 9i Release 2) and generated the EUL. There were some warnings pointing to invalid characters in the names of the objects, but the import succeeded.
    I could see all the imported definitions from OWB using Discoverer Admin (also iDS Release 2 - EUL version 5).
    Sample of my star schema:
    fact table
    salesman_id
    product_id
    sales_amt
    dimension
    salesman (which also contains salesman name attribute)
    product (which also contains product description attribute)
    So I can see the Discoverer joins from the fact to the dimension tables.
    When I create a workbook (report) using Discoverer Desktop, I face the following problems:
    If I just have Salesman (ID or Description) as a row, everything is fine.
    If I add in the Product ID, it still works as normal.
    But when I add Product Description into the report, I see some weird behaviour: the Salesman ID field is automatically filtered to show only a certain number of records in the result (fewer than it initially showed)!
    Usually if you put a field in the Page Items section you can select <All> to view every record, right? In the scenario I just mentioned, I only see some record values... I don't see the <All> option displayed for me to choose.
    Has this got something to do with the foreign key relationship from the fact table's Product ID field to the Product dimension table? It works for the Salesman dimension!
    The only thing different about the product dimension is that the Product ID field is VARCHAR2 length 18. It can hold numeric values like 000000000012345678 (numbers are padded with zeros to make up 18 characters) and alphanumeric values like MAT_TG12345 (left-justified, not padded). I can see that the report only retrieves the records whose Product ID matches the numeric values!
    Any ideas?
    Thanks to anyone who is kind enough to read this lengthy post and reply :)

    It would appear that you do not have integrity between the fact and the product dimension.
    Does the foreign key exist in the database (deployed), or is it only in the OWB repository (not deployed)?
    Two things to try:
    1. Change the join to an outer join.
    2. Take the SQL from the View SQL drop-down option and run it in another SQL editor.
    Neil

  • Question on Two EULs in One Environment

    Hi Experts,
    We are planning to set up Fusion Middleware for Discoverer. We have two EULs in two different databases. Please let me know how I can configure the environment to use both Discoverer EULs. I have created two different repositories for the two EULs.
    Thanks
    Naga

    Hi Naga
    When you say repository - do you mean an infrastructure database containing Single Sign-On? That would make things a lot trickier.
    In the vast majority of cases, you need one RCU per Sarbanes-Oxley instance - typically Production, Test and Development. With different RCUs you will have different middleware installations, and therefore the URL you call for each one will have a different fully qualified domain and port, like this:
    PROD: http://prod_server.mycompany.com:7777/discoverer/plus
    TEST: http://test_server.mycompany.com:7780/discoverer/plus
    DEV: http://dev_server.mycompany.com:7782/discoverer/plus
    These are just given as examples but I hope you get the picture.
    Does this help? Going to bed now, so I won't be able to look again till the morning - I am in Central time and it is now half past midnight.
    Best wishes
    Michael

  • Scheduling Schema Owner in Oracle Apps EUL

    Hi All,
    Here is pertinent information about my Discoverer configuration:
    OracleBI Discoverer 10g (10.1.2.2)
    Oracle Business Intelligence Discoverer Plus 10g (10.1.2.54.25)
    Discoverer Model - 10.1.2.54.25
    Discoverer Server - 10.1.2.54.25
    End User Layer - 5.1.1.0.0.0
    End User Layer Library - 10.1.2.54.25
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    I read the Disco Admin guide (B13916_04.pdf, pages 141-151) and the Oracle Business Intelligence Tools Release Notes, 10g Release 2 (10.1.2.1) (http://download.oracle.com/docs/html/B16031_06/toc.htm#BABJGDBB), which indicated that I can create a separate schema for scheduling Discoverer reports instead of settling for APPS.
    I ran the script batchusr_app.sql and responded to the resulting prompts as follows:
    a. Enter Applications Foundation Name (FNDNAM): APPS
    b. Enter the name of the Scheduled Workbook Results Schema: XX_SCHEDULING
    c. Enter the Batch Repository Password: password
    d. Do you wish to alter the Default and Temporary Tablespaces for XX_SCHEDULING? Yes
    e. Default Tablespace? XX_TS_DATA
    f. Temporary Tablespace? TEMP
    g. A Scheduled Workbook Results Schema can only have access to one EUL. EUL Owner Username: EUL_US
    h. Logged in as Scheduled Workbook Results Schema. EUL5_BATCH_REP_SECURE already exists. Do you wish to replace EUL5_BATCH_REP_SECURE? Yes
    i. Press Enter/Return to Exit
    I entered the FNDNAM, which is APPS (the dbc file validates this).
    I explicitly granted the SELECT privilege on v_$parameter to the scheduled workbook results schema (cf. page 148 of the Oracle Disco Admin User Guide, B13916_04.pdf).
    Any thoughts on why my scheduled workbooks are owned by APPS and not XX_SCHEDULING?
    Thanks,
    Patrick
    Edited by: Patrick Bacon on Jun 8, 2009 1:58 PM

    All,
    I have found the root cause of my trouble. Our DBA ran the script batchusr_app.sql, but I was unaware of some errors occurring while it ran (e.g. the prompt "Enter database connection (e.g. T:node:sid, ServiceName) [LOCAL]:" was not responded to). This resulted in the packages being erroneously created in the schema the DBA was logged in with while running the script.
    Thank you for trying to help, Rod.
    Patrick

  • EUL export and import

    Scenario:
    Installation #1: Infrastructure Repository Database installed along with Oracle App Server etc. The EUL resides on the Repository Database (let's say under schema eul1). The Application Data (let's say in schema - app1) for reporting also resides in the Repository database under a different schema. Business areas and worksheets reside in the eul1 schema.
    Installation #2: This was done to separate the application data from the repository data. So now, the application data for reporting resides on another server. A new install of infrastructure/repository/App Server was done. In this installation, the eul1 EUL from installation #1 was exported (using the Oracle exp/imp utility) and imported into the new schema called eul2. The two scripts that the manual suggests were also run. The views that were used to access data in app1 are now pointing appropriately to the new server (where the application data for reporting has been installed).
    The issue:
    Using Discoverer Administrator, I can point to the new installation and view the Business Area, which seems to have been migrated correctly. However, using Discoverer Desktop, I do not see any worksheets.
    Please tell me if there's anything else I need to do to make the worksheets available. These worksheets are working great in Installation #1 and are meant for end users to use. At this point, only one EUL is being used.
    If there's specific info you would like to have in trying to help me resolve this, please let me know. This is urgent.
    Thanks.

    Hi
    Have you exported and imported the worksheets?
    Do you see them in the Admin (folder dependencies)?
    Ott Karesz
    http://www.trendo-kft.hu

  • AS 10g Discoverer EUL Upgrade

    I have installed App Server 10g, and as such the Discoverer server requires the repository to be at 5.0.1.x. My current repository is 9.0.4.43.15.0. How can I upgrade it? Which Disco Admin version should I use? My current Disco Admin is from the iDS CDs, version 9.0.2.0.1. Please advise.
    Thanks!
    Karen Olszyk

    Don't know if this will help...
    My production version is 4.1.44.
    I created an EUL via Admin using version 5.
    Once 10g was released, I had to delete the EUL5_ EUL that I had created earlier and re-convert my version 4 EUL to the 10g version of Discoverer. (Works great now!)

  • EUL5_BATCH_REPOSITORY could not be found in the repository schema

    Hi Gurus
    I created a new schema, XXDISC, to store the scheduled workbook results and executed batchusr_app.sql to complete the necessary setup.
    I was able to schedule workbooks as well.
    I have since dropped the XXDISC schema, and now when I access scheduling in Disco Plus I get the error message
    "Insufficient database privileges for scheduling: A workbook cannot be scheduled for the following reason(s) - The package
    EUL_BATCH_REPOSITORY could not be found in the Repository Schema XXDISC. Batchusr.sql should be run against the current EUL"
    Info:
    Discoverer 11.1.1.3
    Apps Mode EUL
    Regards
    Ariv

    Hi,
    Do you have a question here?
    Did you try what the message suggests: "Batchusr.sql should be run against the current EUL"?
    The error probably means your current EUL (DB user) is missing the privileges needed for scheduling; try running the script and see if it helps.
    I suggest you do that in a testing environment first...
    Tamir

  • EULA Sync Error - Offline WSUS Sync Server

    I've been asked to look at a customer's installation where they are getting the EULA "not downloaded" error in Wsyncmgr.log, and this may be causing issues further up the chain.
    There are three air-gapped environments with no WWW access in any of them - all WSUS downloads are done offline, and the metadata cab and WSUSContent dirs are copied from the offline WSUS server to each central site server in each of the three environments.
    I am getting the EULA failures on 5 updates, and it looks like this has been happening for a number of months. On investigation using the update unique IDs, these updates have either been superseded or expired. For instance, the wsyncmgr log states
    054ee1db-db4b-4bfe-9a08-10ce84627542 - Windows 7 Internet Explorer 9
    but on checking both the SCCM updates repository and the WSUSContent dir, that unique ID does not exist.
    If I check the update Windows 7 Internet Explorer 9, it has a different ID, not 054ee1db-db4b-4bfe-9a08-10ce84627542, so I can't just remove it.
    Any ideas where this "old" ID info is being held, and how I can remove it?
    Thanks, Nick B
    Solutions Architect

    The EULA failures, in the absence of any other file access issues, are likely a result of the EULAs not being properly downloaded to the connected server prior to performing the export. Almost universally this is caused by a network device on the perimeter blocking access to TXT files. Normally this should manifest on the connected server as well, when approving that update (to get the download files) and the EULA is missing.
    For superseded updates, this whole conversation is moot, because Configuration Manager 2007 is going to EXPIRE the superseded updates anyway, and you'll never be able to deploy them.
    The UpdateID will not appear in the ~\WSUSContent folder, so that won't get you anywhere. The filename in ~\WSUSContent is derived from the 40-character hex representation of the SHA-1 hash of the file. For updates with EULAs, you can obtain the actual filename (just as for binaries) by right-clicking the update in the WSUS console and selecting "File Information".
    So, the first question to determine is this: is the EULA TXT file physically present in the ~\WSUSContent folder of the connected server?
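    Lawrence's point about the filenames can be illustrated with a short sketch: the name under ~\WSUSContent is the 40-character hex SHA-1 digest of the file's contents, so searching the folder for an UpdateID GUID will never match anything. The two-character subfolder and the file extensions below are assumptions about the store layout, shown here with Unix tools purely for illustration.

    ```shell
    # Hash a sample payload the way the content store names its files
    printf 'example update payload' > /tmp/sample-update.bin

    # 40-character hex SHA-1 digest of the file contents
    hash=$(sha1sum /tmp/sample-update.bin | awk '{print $1}')

    # Assumed layout: files grouped into subfolders named after the
    # last byte (two hex characters) of the hash
    subdir=$(printf '%s' "$hash" | tail -c 2)

    echo "look under: WSUSContent/${subdir}/${hash}.txt (EULA)"
    ```

    This is why the "File Information" dialog, which reports the actual hash-derived filename, is the reliable way to locate an update's payload on disk.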
    Lawrence Garvin, M.S., MCSA, MCITP:EA, MCDBA
    SolarWinds Head Geek
    Microsoft MVP - Software Packaging, Deployment & Servicing (2005-2014)
    My MVP Profile: http://mvp.microsoft.com/en-us/mvp/Lawrence%20R%20Garvin-32101
    http://www.solarwinds.com/gotmicrosoft
    The views expressed on this post are mine and do not necessarily reflect the views of SolarWinds.

  • Can Multiple users work on the same work Repository ?

    I have a master repository and a work repository on one machine. Can multiple developers connect and work on the same work repository? How?

    Oh yes!
    It is very simple. Follow these steps:
    Once the master and work repositories have been created on a system, you just need all the information supplied when creating a Designer login: the database user name and password, the URL and driver, as well as the master repository's user name and password.
    If you have that information, you can create a new Designer login by providing it, and you will have full access in Designer to the work repository you want to connect to.

  • How can I move the ODI Work Repository from one server to another server?

    How can I move the ODI Work Repository from one server to another server?

    Hi,
    If you would like to move your source models, target models and project contents from one work repository to another (i.e. from a Dev server to a Prod server):
    1. First, replicate the master repository connections manually, using the same naming conventions.
    2. In the Dev server's work repository, go to the File tab and click Export Work Repository (save it to a folder).
    3. After exporting, you can view the XML files in that folder.
    4. Now open the Prod server and make sure you have already replicated the master repository details.
    5. Right-click the model and import the source model in synonym mode insert_update (select the source model from the folder where your XML files are located).
    6. Similarly, import the target models, then the project.
    Now, check. It should work.
    Thank you.

  • Is there a way to create a local package repository

    Is there a way to create a local package repository without technically being a mirror? For example, setting up multiple AL boxes on my network and having them grab all the latest packages from one AL box?
    Thanks,
    Craig

    What you most likely want is an ABS tree of your own, containing only the PKGBUILDs of those packages which you want to be included in your repository.
    You should already have heard of the gensync program. In short, its parameters are the root of the PKGBUILDs, sorted into subdirectories (i.e. like the ABS tree), the intended name and location of the repository database file, and the directory containing the binary packages.
    Let's assume you downloaded the current ABS tree to your hard drive, as well as all matching (same version as in the PKGBUILDs!) packages from a mirror, but you don't want the reiserfsprogs package in your repository. To achieve that, you must remove the /var/abs/base/reiserfsprogs directory, and may optionally remove the binary package, too. Since gensync analyzes the ABS tree you supplied as a parameter, removing the subdirectory of a specific package will cause this very package to not be included in the generated database. Assuming your packages lie in /home/arch/i686/current, your gensync call would look like this:
    gensync /var/abs /home/arch/i686/current/current.db.tar.gz /home/arch/i686/current
    If there are any discrepancies like
      - PKGBUILD, but no matching binary package found
      - PKGBUILD and binary package versions do not match
      - permission problems (writing the db file must be possible)
    gensync will gladly complain.
    Otherwise you should find the db file in the place you specified. Keep in mind that the name of the db.tar.gz file must be equal to the repository tag in the pacman.conf to use the repo.
    To make sure the db contains the right packages, use
    tar -tzf current.db.tar.gz | less
    to list the contents. Every package has its own subdirectory containing the metadata, which is rather obvious considering the file is generated from such a structure in the first place.
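    To see what such a database file looks like inside, here is a minimal sketch that fakes the structure by hand; the package name and metadata fields are made up for illustration, and a real current.db.tar.gz is of course produced by gensync rather than by tar like this.

    ```shell
    # Fake a one-package repository database to show the layout:
    # one subdirectory per package, holding its metadata.
    mkdir -p /tmp/repo-demo/foo-1.0-1
    printf '%%NAME%%\nfoo\n\n%%VERSION%%\n1.0-1\n' > /tmp/repo-demo/foo-1.0-1/desc

    # Pack it the way a repo db is packed (gzip-compressed tar of those dirs)
    tar -C /tmp/repo-demo -czf /tmp/current.db.tar.gz foo-1.0-1

    # Inspect it the same way you would a real db file
    tar -tzf /tmp/current.db.tar.gz
    ```

    If the listing shows one subdirectory per package, the file is structured the way pacman expects.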
    The binary packages along with a correctly generated db file are all you need. Make the repository directory containing these files available through FTP if local availability doesn't cut it for you, edit your pacman.conf if needed, and use it!
    Adding packages works similarly: all you need is the PKGBUILD in an ABS-like tree (it doesn't have to be the official tree; gensync doesn't care where the files come from - just stick to one subdirectory per PKGBUILD and you'll be fine) and the matching packages somewhere else. Run gensync with the appropriate directories, and cackle with glee.
    HTH.

  • How to create a repository(not just custom) using your hard drive

    I don't know if many people know about this, so I am giving this a shot. There are three major articles on wiki.archlinux.org: "Custom local repository", "Using a CD-ROM as a repository", and "Offline Installation of Packages".
    I was first confused because, when reading "Offline Installation of Packages", I didn't know what these ".db.tar.gz" files were. I came mainly from a Debian/Ubuntu background (I actually tried many distros before this), so I had to get used to the way the repository works without a graphical install manager. However, I enjoy a challenge, and I found out that these are database files that contain descriptions of the packages and where their files are located. The ones on the FTP server are already compiled; I don't know, however, whether they are compiled with the most recent versions.
    With all that said, I thought you had to have everything in one directory for this to work, but as it turns out, location is not really an issue. I decided to have the directory reside under root. I chose root because it's only for installing my own packages; I could have used a separate user account, such as "repos" in PCLinuxOS (another distro I tried), but I didn't want a separate account for this. Therefore, I created "/root/repository", and within it a directory for each repository archive. I basically did a "cd /mnt/dvd", went to each repository directory, and copied all the ".pkg.tar.gz" files into their respective directories with "cp * ~/repository/<name-of-dir>". For instance, I started with the "core" directory, because there were some things from core I didn't install during installation, and if a package needed them, they would be there. The same goes for the other directories, such as "community", "testing", "unstable", etc. You can check the FTP mirrors to see which directories are available. The main point is that your files should be in the ".pkg.tar.gz" format; these package files get converted into a sort of database that, as I mentioned, tells the system each package's description, where its files are located, and so on.
    The command that creates this database is "repo-add", described in the "Offline Installation of Packages" article. Its format is "repo-add /path/to/dir.db.tar.gz *.pkg.tar.gz". So, for the core packages, you would "cd ~/repository/core" and run "repo-add core.db.tar.gz *.pkg.tar.gz". Each repository gets its own database file; "extra.db.tar.gz", for example, lives in the "extra" directory.
    Then you need to edit pacman's configuration file, "/etc/pacman.conf". I basically commented everything out except the repositories I need; for example, "[core]" points at "/etc/pacman.d/core", which normally lists the servers where these files are located. This information is in the "Custom local repository" article.
    Furthermore, I edited each server file located in "/etc/pacman.d/<repository>", where <repository> is core, extra, etc. I would run "nano /etc/pacman.d/core", for example, comment out all the servers, and add a local repository by typing in "file:///root/repository/core". After saving, I did a "pacman -Sy" to update the repository database. Now I can do "pacman -S <package-name>", where package-name is whatever I want to install. Voila! Please let me know of any suggestions, questions, insights, or comments; I hope I'm not missing anything in this article. I do remember doing "rm -rf *" in the "/var/lib/pacman/<repository>" directories and then "tar xvf <repository>.db.tar.gz"; I don't know whether that has something to do with it. Be careful with the "rm -rf *" command, for those who aren't informed: you can erase your hard drive if you are not careful.
    P.S. Please note that all of this was done as the root user.
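    For reference, the pacman configuration edits described in the post might look roughly like the sketch below. This assumes the configuration layout of the pacman version the post describes (a repository section in pacman.conf plus a server list file under /etc/pacman.d/); the paths are the ones from the post.

    ```ini
    # /etc/pacman.conf - leave only the repositories you need uncommented;
    # the tag in brackets must match the db file name (core.db.tar.gz)
    [core]
    Include = /etc/pacman.d/core

    # /etc/pacman.d/core - mirror Server lines commented out, replaced
    # by the local directory created in the post
    Server = file:///root/repository/core
    ```

    After saving, "pacman -Sy" refreshes the database and "pacman -S <package-name>" installs from the local directory, as the post describes.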

    pressh wrote:
    gradgrind wrote:
    smitty wrote: pressh, I understand and appreciate your point of view... well taken! Are you implying that I should have written in steps, such as 1, 2, and 3? Also, should I have gotten rid of the redundant information that is contained in the wiki articles, and/or taken out the commands and left only the explanations? Is this what you mean? Sorry if I seem redundant with these questions, but I'm curious so I can improve in the future. I am new to this and open to any suggestions and comments.
    Maybe you could either edit the existing wiki pages where they were not clear to you, or else add a new wiki page, or both. Certainly give the whole a clearer (visual) structure, and (if they don't already exist) add links between the connected wiki pages.
    Yes that is partly what I mean. Further you could get rid of the information that is not really needed to follow the guide (for example what the command 'repo-add' does. People could if they are interested look it up in the script itself, or you could add it here and link to it).
    And yes a bit of structure would be nice. You don't have to nessesarily call it 1,2,3, as long as it has some kind of structure in it (the visual point is very important here). You could take a look at existing wiki pages on the web and see how most of them (not all of them are good of course) are structured.
    That's a good point, too. How do I find out which articles are more effective? I am researching this matter at the moment and have come across articles with tips on technical writing. Could that help in the long run? Or is it better to get feedback from other users and improve that way? In other words, do first and ask later, as one user pointed out?

  • Error in installing a new Repository in OWB 10g Release 2

    Hi,
    I am facing a consistent problem in creating a new repository, even after uninstalling and re-installing the OWB client many times. While creating a repository, I get the following three errors, after which the Repository Assistant automatically shuts down:
    1. The wizard noticed that the installation parameter 'enqueue_resources' of the database GRUSIT is set to 968. The recommended value for Warehouse Builder is 3000.
    2. The Warehouse Builder repository owner installation failed on user REPOWNER.
    java.sql.SQLException: ORA-04031: unable to allocate 4080 bytes of shared memory ("shared pool", "BEGIN
    DECLARE
    PROCEDURE brow...", "PL/SQL MPCODE", "pl/sql DS pg")
    3. INS0029: Error occured during installation. Check the log file
    F:\OraHome_1\owb\UnifiedRepos\log_070504_115828.000.log
    for details.
    Could you pls help me in resolving this issue?
    Thanks in advance,
    Tanvi

    Does this mean, ... we have to install the 64-bit OWB and install a new repository on the 64-bit server?
    In my opinion you don't need to create a new repository.
    After migrating the database to 64-bit, perform the steps from Metalink note 434272.1, "How To Update Warehouse Builder 10.2 After A Database Cloning".
    It is better to keep the same path for the new 64-bit OWB software. If you install OWB into a different path, you need to update the OWBRTPS table with the new path to the OWB software (see Metalink note 550271.1).
    Regards,
    Oleg

  • Error while registering a new repository on Data Services 4.0

    Hi all,
    I have a BO Enterprise Platform 4.0 + Data Services 4.0 installation, and I'm trying to register a new local repository in addition to the default one created at installation. I followed these steps:
    1) Created a new database schema for the repository
    2) Created the local repository with the Repository Manager
    3) Created a new job server associated with the new repository through the Server Manager
    4) Tried to register the new repository in the CMC
    I wasn't able to complete step 4, since I got the following error: "unable to connect to profile server".
    Any clue?
    Thank you very much
    Pietro
    Edited by: Pietro Castelli on Jan 30, 2012 12:03 PM

    What is the complete version of DS 4.0? Also check the REPO_TYPE column in the AL_VERSION table; for a local repo this will be NULL.

  • Error while generating DDL commands using startSQLRepository for a new Repository

    Hi,
    I am trying to generate DDL commands using startSQLRepository for my new repository, SubashRepository, so that I can use them to create the new table structure.
    All the repository-related changes look good; I can see my repository in the ACC.
    When I run the command:
    startSQLRepository -m SupremeATG -repository /com/supreme/SubashRepository /com/supreme/subashRepository.xml -outputSQLFile C:/ATG/ATG9.3/SupremeATG/config/com/supreme/subashRepositoryDDL.txt
    I get following error:
    Table 'SUBASH_MEMBER' in item-descriptor: 'member' does not exist in a table space accessible by the data source.  DatabaseMetaData.getColumns returns no columns
    Note:
    * Errors related to the definition file were cleared, as the same command threw a relevant exception while trying to store an array property without creating a multi table.
    * Now this is the only exception I see.
    * Some DDL is getting generated in the output file, but it relates to the inventory repository (I am not sure why this is happening, as I specifically gave the path to my definition file).
    Any help in resolving this is highly appreciated.

    Please post in the ATG forum.
