Reconciling Repository

I have a repository on my computer. I captured all the tables I wanted. Now I am trying to reconcile the repository with my database. In the Reconciliation Report there are errors on my views (there are differences between the repository's views and the database's views - object type - PL/SQL definition), but I cannot find the reason, because the SQL statements are the same.
Another error in the Reconciliation Report relates to sequences. The report says the repository's sequences are NOCYCLE and the database's sequences are CYCLE, but I already changed the repository's sequences to CYCLE.
Another problem relates to procedures and functions: in my database there are packages calling private procedures and functions, so the report says "Procedure/Function not in Database".
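To illustrate the last point, this is the kind of construct involved (a minimal sketch; the names are invented):
CREATE OR REPLACE PACKAGE my_pkg AS
  PROCEDURE public_proc;  -- only this appears in the package spec
END my_pkg;
/
CREATE OR REPLACE PACKAGE BODY my_pkg AS
  -- private: declared only in the body, so it is not a standalone object in the database
  PROCEDURE private_helper IS
  BEGIN
    NULL;
  END;
  PROCEDURE public_proc IS
  BEGIN
    private_helper;  -- the capture turned calls like this into standalone procedures
  END;
END my_pkg;
/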
Help me, please!!!!

Hello Jamir,
thank you for being willing to help me.
My problem is the following:
I started using Designer about 20 days ago. I am trying to reverse-engineer an Oracle 9i database. The first thing I did was capture all of my database objects into a repository in Designer. That part is OK. Now I am generating the comparison between the database and the repository, and that is where the problem lies. When Designer generates the Reconciliation Report, it raises some errors. For example, it says my database sequences are CYCLE and the repository ones are NOCYCLE, so it generates a script to alter my database sequences to NOCYCLE. But in fact the repository sequences are set to CYCLE, I am sure of it. Another problem: I have three packages in my database that have functions and procedures built inside them, that is, they do not appear in the database under the function/procedure "tab" but are defined only inside the packages. At capture time, Designer defined these procedures and functions outside the packages, merely calling them from within. So the Reconciliation Report raises an error saying that there are procedures and functions in my repository that do not exist in my database.
Another question I have: when capturing a table, can I also bring over the data (records) it contains?
Could you help me???

Similar Messages

  • Initial Load and Reconcile of IdM accounts vs Resources. Any advices?

    Hellos.
    It is now time to begin the initial startup of IdM where the target resources already have entries.
    Is there anywhere an idiot's guide explaining to a novice IdM user the steps involved in loading and linking IdM accounts for the first time? I have to stress the guide has to be designed to meet novices' needs, i.e. those doing it for the first time.
    We are fairly confident in running IdM from day 1. What I am unsure about is the best techniques to employ on day 0 so that the first FF Async run on day 1 will update IdM accounts rather than attempt to reinsert them.
    We have 3 target resources: 1 AD, 2 LDAP, and 1 input FF Async source (plus a form for manual adds/mods).
    The targets hold a mix of contract staff and employees. The FF source holds just employees; the contractors' data is entered by hand.
    Am I being stupid if I do the following:
    1. Load IdM Accounts from latest FF source (employees)
    2. reconcile matching AD accounts by name.
    3. reconcile matching LDAP1 accounts by name.
    4. reconcile matching LDAP2 accounts by name.
    5. From the unmatched AD accounts (since an AD account must exist for current staff), make a 2nd load file for contractors and load these accounts into IdM.
    6. reconcile matching AD accounts by name
    7. reconcile matching LDAP1 accounts by name
    8. reconcile matching LDAP2 accounts by name
    9. Examine the leftovers and manually correct the misspelt or erroneous ones, treating the rest as ghosts.
    I am unsure just what the reconciliation is doing; I hope it builds the links between IdM and the resource.
    What I am trying to achieve is a situation where, at the end of day 0, I have all the IdM accounts in my repository, correctly linked to resource accounts, so that the FF Async can manage the bulk of them.
    I believe this has to be done for every IdM implementation. What I am after is advice and pointers to guidance from those who have had the experience of going through the implementation cycle at least once.

    I had a similar problem although I was doing a load from file and I was linking the account to the resource.
    I noticed you had:
    <Field name='accounts[ISA].created'>
    <Default>
    <s>true</s>
    </Default>
    </Field>
    Try using the following instead:
    <Field name='waveset.accounts[GrandSlamXML].created'>
    <Expansion>
    <Boolean>true</Boolean>
    </Expansion>
    </Field>     
    That is what worked for me.

  • Mappings out of synch with OWB repository

    Hi,
    I have a global problem with my repository: every mapping reports (many) validation errors advising me to reconcile inbound or outbound.
    First question: how could this happen on such a global scale (it affects every mapping)?
    Second: how do I fix it? When I reconcile inbound, a) all the links to columns are lost and I have to manually re-attach them, which is very risky, let alone tedious; b) after reconciling inbound, some tables don't match what is in the repository (columns missing). The only way around this was to delete the table, drag it back into the mapping, and re-attach the columns.
    Third question: since the prod versions of these mappings are running successfully, if I deploy a mapping that is out of synch with the repository, will it still function as it should?
    Any help appreciated.
    Cheers,
    Brandon

    I think the problem you are having is that your mapping operators are not reconciled to the repository objects. It is purely a logical OWB problem of matching a mapping operator to an OWB object. It should not affect your business logic or the code that is generated by OWB.
    The warnings are just that, warnings. If you were missing a connection between objects, you'd get an error.
    Even though an operator in your map exists as an object in your repository, they are not properly bound. If you reconcile inbound each object (match by UOID and by name), you should stop getting the warnings.
    This is one of those quirks with OWB (at least with 9.2.0.8, which I am using). I noticed that if I import a mapping from another repository, the operators become unbound. I've given up trying to fix them each time; I just ignore the warnings.

  • URGENT: All accounts on resource appear as deleted on reconcile

    IDM 6.0 SP1 Production Issue.
    I have a problem where all accounts on a resource type have started appearing as DELETED when I do a full reconcile. This only started today; the scheduled reconcile this morning still worked.
    Resource Type is ORACLE, but I had a similar problem with a scripted gateway adapter a couple of weeks ago and worked around that by converting it to a database adapter (my thinking was that there was something wrong with the scripted gateway adapter).
    Anybody seen this kind of thing before?
    Possible repository corruption?

    Solution found: The client has different naming standards for OPS$ accounts in production and acceptance and the code change was missed on the go-live.

  • Error while reconciling - Data is not Base64 encoded

    I was trying to do a full reconciliation and it failed with the following error.
    Reconciliation was terminated because the number of errors exceeded the configured threshold.
    The following errors were received during reconciliation:
    Error while reconciling accountId uid=1testRecordReviewOnl,ou=people,ou=AppUser,dc=educ,dc=mde on resource MIDMS Edp LDAP:
    java.lang.RuntimeException: Data is not Base64 encoded.
    I removed the password attribute from the reconciliation policy and reconciliation ran to the end without any error. What do I need to do to fix this?
    Thanks!
    Kabi

    I am using IDM 8.0.0.0, which does not allow loading "malformed passwords" into the repository at all. This hint might help you understand IDM, but most likely will not directly solve your problem.
    As mentioned before: Debug page: http://<server>/idm/debug
    Locate the "List Objects" row, select "Resources", push the "List Objects" button.
    Select your resource; Click View; search for "credentials"
    The encrypted credential (aka password) has to be syntactically correct.
    e.g.
    <ResourceAttribute name='credentials' displayName='com.waveset.adapter.RAMessages:RESATTR_PASSWORD' type='encrypted' description='RESATTR_HELP_219' value='0D84E1E9E897E0A5:1C6C4889:11C74E0F02B:-7FF1|hjp/S8m0uNe5Rxs4CGU7uw=='>
    </ResourceAttribute>
    The vertical bar sign "|" splits the credentials in two parts
    1. Server encryption key
    2. Base64 encoded password
    If the latter is not Base64 encoded, you will get the error described.
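    A quick way to test the second part from a Unix shell (a sketch, assuming GNU coreutils base64; the value below is just the example credential from above):
    # decode everything after the "|"; a non-zero exit means it is not valid Base64
    CRED='0D84E1E9E897E0A5:1C6C4889:11C74E0F02B:-7FF1|hjp/S8m0uNe5Rxs4CGU7uw=='
    echo "${CRED#*|}" | base64 -d > /dev/null && echo "valid Base64" || echo "not Base64"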
    Again, I doubt that this is your problem or solves it, but it might help you understand.
    By the way, what exactly do you mean by "I removed the password attribute from the reconciliation policy"?

  • Design Repository Configuration objects after an import

    I have one design repository for the development environment and another for the production environment.
    I have 3 target schemas in Development and 3 target schemas in Production.
    I then do a Metadata Export of my Development project and a Metadata Import into my Production environment (MDL).
    Now I have all objects in my Design Repository in Production environment.
    I know that I have to change the Locations parameters (for example: The Host Name and Port Number)
    Do I have to change anything else?
    What do I have to change?
    Thanks!

    Apart from the location and physical configuration parameters, don't we have to re-import (reconcile) the tables and PL/SQL transformations as well?
    We faced a similar situation: when we imported the project into a different environment and changed the location, validation gave an error. Re-importing the tables and PL/SQL transformations solved it.
    We are using OWB 10.1 with Oracle 10g on a Windows 2000 server.
    regards
    Sagar

  • New column names during Reconcile Outbound

    I am trying to create a target table with readable column names from a source table that has very cryptic column names.
    In a mapping I have defined my source table, a simple date filter and created my target table using the 'Create Unbound Object with No Attributes' option. I then mapped all columns from the INOUTGRP from the filter operation to the target table and edited the column names in the target table.
    I then wanted to have the target table bound to an actual table using the 'Reconcile Outbound' option, telling it to 'Create a new table'. When I did that, it created the table with the source-style unreadable column names.
    I then tried to pre-create the table with no columns and when I did the 'Reconcile Outbound' I selected 'Reconcile with an existing table', selected my pre-created table with no columns and checked 'Match by bound name'. It created the columns in the target table but again with the unreadable names. I also tried 'Match by position' with the same result.
    How can I create a table through the mapping and change the column names?
    Do I have to pre-create the target table with the column names I want and use that in the mapping operation?
    Any help is greatly appreciated.
    Gary

    I got this to work by changing the columns in the FILTER operator rather than in my target table.
    It worked when I created the target table as an unbound table object with no attributes, linked the INOUTGRP from the FILTER operator to the target table, and then created the table in the repository using Outbound Reconciliation.
    I'm still not sure why I couldn't rename the columns in the unbound table before doing the Outbound Reconciliation?
    Thanks for the help.
    Gary

  • Locked by Reconciler

    We are receiving the following error multiple times during reconciliation of Active Directory. We are using Identity Manager 6.0 SP1. I have not seen this when running AD recon on previous versions of Identity Manager. There is only one reconciliation running. This is running against a MySQL repository, but we get similar results with SQL Server.
    Error while reconciling accountId cn=SMITH\, JOHN,ou=CLIENTS,dc=CUSTOMER,dc=COM on resource AD:
    com.waveset.exception.LockedByAnother: Object is locked by Reconciler laptop.reconcile.cn=SMITH\, JOHN,ou=CLIENTS,dc=CUSTOMER,dc=COM.

    Maybe manually unlock the user object. The best way is to use the lhconsole option.
    --sFred

  • Repository Reports - Which one?

    Hi guys
    Does anyone know which report will compare the contents of a container against a database? The results of the report should show what exists in the database and not in the repository and vice versa. I know that this was a report in Designer 6.0, is this report still available in Designer 9i?
    Thanks in advance
    Ciaran

    The report is actually part of the Server Generation, and the contents are controlled via Preferences.
    For your container of interest, in Design Editor:
    - set Server Generator preferences for Reconcile Report to show Differences Only, show Structure but not Implementation (cause you probably don't care about tablespace and storage diffs), but leave it as Report + SQL (cause it is always nice to look at the SQL Alters that get generated).
    - start up a Server Generation. In the first tab choose "Gen direct to database" and give it the connection info for your existing database. Note it will assume that you want to look at elements owned and directly accessible to this login account. In the second tab choose all the elements from your container that you want to generate / compare (use double-arrow to select all).
    - perform the generation. You'll notice that you get some buttons in the result panel. One lets you access the Reconciliation Report (what you're asking for), one accesses the generated SQL (Alters / drops / creates / etc), and one executes the generated SQL (which you probably shouldn't push --- not right yet anyways).
    - depending on the differences, you'll also get asked about how you want to handle constraints and columns and suchlike. This will impact the generated SQL but not the report.
    The report will show you ins / outs / diffs for the entire set you specified.
    ((( you can also do this by reverse engineering the target database into a 2nd container and then using Compare between the containers -- but this is much more work and much messier )))
    enjoy

  • RMAN repository clean up

    Hi,
    I have a 10g db and the backup policy uses RMAN.
    Daily backups are taken by RMAN and my retention policy is the default.
    Now my total catalog db size is 12 GB,
    but I want to remove entries older than 1 month from the repository, because all the archives have already been backed up and also applied to the standby db.
    Is there any method to clean up the RMAN repository?
    Thanks in advance

    Viral,
    This is what I do to reconcile the contents of the recovery catalog with the backup files on disk. Check out Oracle's documentation on RMAN commands before executing these commands.
    Bill
    $ su - oracle
    $ export ORACLE_SID=<SID of target database>
    $ rman rcvcat rman/<password>@<recovery catalog instance>
    RMAN> connect target
    RMAN> LIST EXPIRED BACKUP;
    RMAN> CROSSCHECK BACKUP;
    RMAN> LIST EXPIRED BACKUP;     
    RMAN> DELETE EXPIRED BACKUP;               (Respond “y” to delete objects.)
    RMAN> LIST EXPIRED BACKUP;     
    RMAN> LIST BACKUP OF DATABASE;
    RMAN> LIST BACKUP OF DATABASE SUMMARY;
    RMAN> LIST EXPIRED ARCHIVELOG ALL;
    RMAN> CROSSCHECK ARCHIVELOG ALL;
    RMAN> LIST EXPIRED ARCHIVELOG ALL;
    RMAN> DELETE EXPIRED ARCHIVELOG ALL;     (Respond “y” to delete objects.)
    RMAN> LIST EXPIRED ARCHIVELOG ALL;
    RMAN> LIST BACKUP OF ARCHIVELOG ALL;
    RMAN> LIST BACKUP OF ARCHIVELOG ALL SUMMARY;
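    If the goal is specifically to purge records older than one month (rather than only expired ones), one option is a recovery-window retention policy followed by DELETE OBSOLETE; a sketch, assuming the same catalog and target connection as above. Note that DELETE OBSOLETE removes the obsolete backup pieces themselves, not just the catalog records, so check it against your backup requirements first.
    RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 31 DAYS;
    RMAN> REPORT OBSOLETE;
    RMAN> DELETE OBSOLETE;               (Respond “y” to delete objects.)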

  • Can Multiple users work on the same work Repository ?

    I have a master repository and a work repository on one machine. Can multiple developers connect and work on the same work repository? How?

    Oh yes!
    It is very simple.
    Follow these steps:
    Once the master and work repositories have been created on a system, you just need to know all the information supplied when creating a login in Designer,
    like the user name and password of the database, the URL and the driver, as well as the master repository's user name and password.
    If you have that information, you can create a new login in Designer, supplying all of the above, and you will have full access in Designer
    to the work repository you want to connect to.

  • How can I move the ODI Work Repository from one server to another server?

    How can I move the ODI Work Repository from one server to another server?

    Hi,
    If you would like to move your source models, target models and project contents from one work repository to another,
    i.e. from a Dev server to a Prod server:
    1. First, replicate the master repository connections manually, i.e. with the same naming conventions.
    2. Go to the Dev server work repository -> File tab -> click on Export work repository (save it in a folder).
    3. After exporting, you can view the xml files in the folder.
    4. Now open the Prod server and make sure you have already replicated the master repository details.
    5. Now right-click on the model and import the source model in synonym mode insert_update (select the source model from the folder where your xml files are located).
    6. Similarly, import the target, then the Project.
    Now check. It should work.
    Thank you.

  • Is there a way to create a local package repository

    Is there a way to create a local package repository without technically being a mirror? For example, setting up multiple AL boxes on my network and having them grab all the latest packages from one AL box.
    Thanks,
    Craig

    What you most likely want is an ABS tree of your own, containing only the PKGBUILDs of those packages which you want to be included in your repository.
    You should already have heard of the gensync program. In short, the parameters are the root of the PKGBUILDs, sorted in subdirectories (i.e. like the ABS tree), the intended name and location of the repository database file, and the directory containing the binary packages.
    Let's assume you downloaded the current ABS tree to your hard drive, as well as all matching (same version as in the PKGBUILDs!) packages from a mirror, but you don't want the reiserfsprogs package in your repository. To achieve that, you must remove the /var/abs/base/reiserfsprogs directory, and may optionally remove the binary package, too. Since gensync analyzes the ABS tree you supplied as a parameter, removing the subdirectory of a specific package will cause this very package to not be included in the generated database. Assuming your packages lie in /home/arch/i686/current, your gensync call would look like this:
    gensync /var/abs /home/arch/i686/current/current.db.tar.gz /home/arch/i686/current
    If there are any discrepancies like
      - PKGBUILD, but no matching binary package found
      - PKGBUILD and binary package versions do not match
      - permission problems (writing the db file must be possible)
    gensync will gladly complain.
    Otherwise you should find the db file in the place you specified. Keep in mind that the name of the db.tar.gz file must be equal to the repository tag in the pacman.conf to use the repo.
    To make sure the db contains the right packages, use
    tar -tzf current.db.tar.gz | less
    to list the contents. Every package has its own subdirectory including the metadata, which is rather obvious considering the file is generated from such a structure in the first place.
    The binary packages along with a correctly generated db file are all you need. Make the repository directory containing these files available through FTP if local availability doesn't cut it for you, edit your pacman.conf if needed, and use it!
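    For reference, the pacman.conf entry for such a repository could look like this (the tag and path are just the examples from above; the tag must match the name of the db file):
    [current]
    Server = file:///home/arch/i686/current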
    Adding packages works similarly: all you need is the PKGBUILD in an ABS-like tree (it doesn't have to be the official tree; gensync doesn't care where the files come from. Just stick to one subdirectory per PKGBUILD, and you'll be fine) and the matching packages somewhere else; run gensync with the appropriate directories, and cackle with glee.
    HTH.

  • How to create a repository(not just custom) using your hard drive

    I don't know if many people know about this, so I am giving this a shot. There are three major articles on wiki.archlinux.org: Custom local repository,
    Using a CD-ROM as a repository, and Offline Installation of Packages. These are available online through the WIKIs at archlinux.org.
    I was first confused because, when I was reading "Offline installation of packages", I didn't know what these ".db.tar.gz" files were. I came mainly from a Debian / Ubuntu
    background (I actually tried many distros before this), so I had to get used to the way the repository works with no graphical install manager for it. However, I enjoy a challenge, and
    I found out that these are database packages that contain descriptions of the packages and the locations of their files. The ones on the ftp server are already compiled; I don't know,
    however, whether they are compiled with the most recent versions.
    With all that said, I thought you had to have it all in one directory for this to work, but as it turns out, location is not really an issue. I decided to have the directory reside under /root.
    I chose root because it's only for installing my own packages. I could have done it under a separate user account, such as "repos" in PCLinuxOS (another distro I tried), but I didn't want a separate account for this. Therefore, I created "/root/repository". Within this directory I created directories for all repository archives. I basically did a "cd /mnt/dvd" and went to the particular repository directories, then copied all the "pkg.tar.gz" files into their respective directories with "cp * ~/repository/<name-of-dir>". For instance, I started with the "core" directory, because there were some things I didn't install from core during installation, and if any packages needed them, they would be there. The same goes for the rest of the directories, such as "community", "testing", "unstable", etc. You can go to the ftp mirrors to find out which directories are available. The main point is that your files should be in the ".pkg.tar.gz" format. These are the package files that get indexed into a sort of database which, as I mentioned, tells the system the descriptions and where the files are located, and so on.
    The command to create this database is called "repo-add"; this information is located in the "Offline Installation of Packages" and "Custom local repository" articles.
    The format for this command is "repo-add /path/to/dir.db.tar.gz *.pkg.tar.gz". So, for the core packages, you would "cd ~/repository/core" and run "repo-add core.db.tar.gz *.pkg.tar.gz". The same goes for whatever repository you are adding; for example, "extra.db.tar.gz" would be in the "extra" directory. (To look inside an existing database, you can unpack it with "tar -xvf /root/repository/core/core.db.tar.gz".)
    Then you need to edit the "/etc/pacman.conf" configuration file for pacman. I basically comment everything out except the repositories I need. So, for example, "[core]" points at "/etc/pacman.d/core", which normally tells pacman where the servers for these files are located.
    Furthermore, I edited each server file located in "/etc/pacman.d/<repository>", where repository is core, extra, etc. I did "nano /etc/pacman.d/core", for example, and commented out all the servers. I then added a local repository by typing in "file:///root/repository/core", saved it, and then did a "pacman -Sy" to update the repository database. Now I can do "pacman -S <package-name>", where package-name is whatever I want to install; a condensed command sketch follows below. Voila! Please let me know of any suggestions, questions, insights, or comments. I hope I'm not missing anything in this article. I do remember using "rm -rf *" in the "/var/lib/pacman/<repository>" directories and using "tar xvf <repository>.db.tar.gz"; I don't know if that had something to do with it, though. Be careful with the "rm -rf *" command, because you can erase your hard drive if you are not careful, for those who aren't informed.
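    To condense the steps above into commands (a sketch using the same example paths as this post; adapt to your repositories):
    # copy the packages from the install media into the local repository
    mkdir -p /root/repository/core
    cp /mnt/dvd/core/*.pkg.tar.gz /root/repository/core/
    # build the database file that pacman reads
    cd /root/repository/core
    repo-add core.db.tar.gz *.pkg.tar.gz
    # after pointing /etc/pacman.d/core at file:///root/repository/core:
    pacman -Sy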
    P.S. Please note all these are done with the root user.

    pressh wrote:
    gradgrind wrote:
    smitty wrote: pressh, I understand and appreciate your point of view... well taken! Are you implying that I should have written in steps, such as 1, 2, and 3? Also, should I have gotten rid of the redundant information if it is contained in the Wiki article, and/or taken out the commands on how to apply them, leaving only the explanation? Is this what you imply? Sorry if I seem redundant with these questions, but I'm curious so I can improve in the future. I am new to this and open to any suggestions and comments.
    Maybe you could either edit the existing wiki pages where they were not clear to you, or else add a new wiki page, or both. Certainly give the whole a clearer (visual) structure, and (if they don't already exist) add links between the connected wiki pages.
    Yes, that is partly what I mean. Further, you could get rid of the information that is not really needed to follow the guide (for example, what the command 'repo-add' does; people could, if they are interested, look it up in the script itself, or you could add it here and link to it).
    And yes, a bit of structure would be nice. You don't necessarily have to call it 1, 2, 3, as long as it has some kind of structure (the visual point is very important here). You could take a look at existing wiki pages on the web and see how most of them (not all of them are good, of course) are structured.
    That's a good point, too. How do I find out which articles are more effective? I am doing research on this particular matter at the moment and came across articles that have tips on technical writing. Could this help in the long run? Or is it better to get feedback from other users and improve that way? In other words, do first and ask later, as one user pointed out?

  • Error in installing a new Repository in OWB 10g Release 2

    Hi,
    I am facing a consistent problem in creating a new repository, even after uninstalling and re-installing the OWB client many times. While creating a repository, I get the following three errors, after which the Repository Assistant automatically shuts down:
    1. The wizard noticed that the installation parameter 'enqueue_resources' of the database GRUSIT is set to 968. The recommended value for Warehouse Builder is 3000.
    2. The Warehouse Builder repository owner installation failed on user REPOWNER.
    java.sql.SQLException: ORA-04031: unable to allocate 4080 bytes of shared memory ("shared pool", "BEGIN
    DECLARE
    PROCEDURE brow...", "PL/SQL MPCODE", "pl/sql DS pg")
    3. INS0029: Error occured during installation. Check the log file
    F:\OraHome_1\owb\UnifiedRepos\log_070504_115828.000.log
    for details.
    Could you pls help me in resolving this issue?
    Thanks in advance,
    Tanvi

    Does this mean, ... we have to install the OWB 64 bit and install a new repository in the 64 bit server?
    In my opinion you don't need to create a new repository.
    After migrating the database to 64-bit, perform the steps from Metalink note 434272.1, "How To Update Warehouse Builder 10.2 After A Database Cloning".
    It is better to keep the same path for the new 64-bit OWB software. If you install OWB into a different path, you need to update the OWBRTPS table with the new path to the OWB software (see Metalink note 550271.1).
    Regards,
    Oleg
