IDoc database cleaning

Hi,
What would be the best way to clean erroneous IDocs from the database?
Can you suggest the best way to delete them?
Thanks and best regards
L

How are you generating the IDoc? Your distribution model maintains the receivers (logical systems) for a given message type. You can maintain as many receivers (clients) as you like in the model and distribute based on that. Each logical system has a port in the partner profile, but that's not related to runtime distribution, except when it's time to push that message to the receiver.

Similar Messages

  • Database clean shutdown status Issues with ESEUTIL

    Hi all, I am currently going through the process of taking my database backup from a dirty shutdown into a clean shutdown state with ESEUTIL.
    A user has lost some items from their Sent Items folder, so I am restoring them for them.
    I have checked the state of the database and it shows a dirty shutdown, so I am running the command below to replay the log files:
     C:\>eseutil /r E00 /l "C:\database recovery\Mailbox Database 1813862868\data" /d "C:\database recovery\Mailbox Database 1813862868\data" -verbose
    It begins to process but then comes up with an error suggesting I use the /a argument. I can't seem to find much information on this and wondered if anyone had any suggestions; I ran with -verbose to help narrow it down.
    Can someone help with what the /a argument does, or, if they have had this issue themselves, a possible way to resolve it?
    Many thanks
    Gordon

    Thanks for the information.
    1. I would start with your VSS backup provider, CloudBerry, since the backup validation should not have passed if those files were missing, and a restore of the EDB should also have provided all the needed log files. Now, it's possible they were missing prior to the backup, and that could be for any number of reasons (e.g. an improperly configured antivirus could have deleted or quarantined them); what is more concerning is that the backup provider called that backup good. I would look at the backup logs to see what was reported, as it's possible they did report an issue during the backup. The other possibility is that the files are within the backup but didn't restore for some reason; to validate that, you would need to do another test restore. It could also be that the logs were deleted upon restore by your antivirus, and of course you can check the AV logs and quarantine.
    2. Running a /P against a restored backup for item-level recovery is not as big a deal as it would be if you were putting the DB back into production. True, you can still lose data with a /P, but for what you were doing it's not an issue, IMO, especially if you recovered the needed data.
    3. The log loss was also in the middle of the needed log chain, which is never a good sign. Sometimes you can get past that by removing all logs after the missing log file and replaying the chain up to that point in time.
    P.S. If you are missing log files post-backup and want to avoid running a /P on an inconsistent/dirty database, the only choices are to:
    A: find the logs (look at the AV logs and quarantine, or perhaps re-restore the files from backup);
    B: do the /P; or
    C: check out our DigiScope product, which can bypass most /P issues by using a forensic mount to open the database and then extract the needed information to PST, or restore it directly to any mailbox on the original or an alternate Exchange server (even alternate versions, i.e. 5.5 --> 2000 --> 2003 --> 2007 --> 2010 --> 2013, etc.).
    Search, recover, and extract mailboxes, folders, and email items from offline Exchange mailbox and public folder EDBs and live Exchange servers, or import/migrate directly from an offline EDB to any production Exchange server, even cross-version (e.g. 2003 --> 2007 --> 2010 --> 2013), with Lucid8's DigiScope.
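For reference, the ESEUTIL escalation path discussed in this thread can be sketched as a dry-run script. The paths are placeholders, each command is only printed (remove the `run` wrapper to execute for real on the Exchange server), and the comment on /a reflects my understanding of Microsoft's eseutil documentation (lossy replay when log files are missing):

```shell
#!/bin/sh
# Dry-run outline of the ESEUTIL escalation path; paths are placeholders.
LOGDIR='C:\recovery\data'
EDB='C:\recovery\data\Mailbox Database.edb'
run() { printf '+ %s\n' "$*"; }   # print only; remove to execute for real

# 1. Soft recovery: replay the E00 log chain into the database.
run eseutil /r E00 /l "$LOGDIR" /d "$LOGDIR"
# 2. If logs are missing from the chain, /a permits a lossy replay
#    (data committed only in the missing logs is abandoned).
run eseutil /r E00 /l "$LOGDIR" /d "$LOGDIR" /a
# 3. Last resort: hard repair of the EDB itself (/P can discard data).
run eseutil /p "$EDB"
# 4. Verify the header now reports "Clean Shutdown".
run eseutil /mh "$EDB"
```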

  • Does the Inventory database clean itself?

    Scenario:
    NW 6.5 SP5
    eDir 8.7.3.8
    ZfD 6.5 SP2
    Issue:
    Need to delete and re-import all workstations (wks) in eDir.
    Details:
    All workstations are imported into eDir and scanned by the ZfD 6.5 Inventory services.
    A number of the wks are (for historical reasons) imported into the wrong contexts,
    and a number of them are imported with a wrong wks name too.
    What happens to the Inventory database if I delete all imported wks objects
    and then re-import them with a new name, some of them also into another OU?
    Can I just do this, and thereafter start a full new inventory scan?
    What will happen to the old database entries for the wks that are imported into a new OU,
    and for the wks that are imported under another name?
    In other words, will the inventory database cleanse itself of the old wks entries?
    - Erik

    On Mon, 14 Aug 2006 09:15:32 GMT, Erik Aaseth wrote:
    > In other words, will the inventory database be cleansing itself for the old wks entries?
    Normally, yes. Take a look at the documentation and the sync schedule of
    the Inventory service object.
    Marcus Breiden
    If you are asked to email me information please change -- to - in my e-mail
    address.
    The content of this mail is my private and personal opinion.
    http://www.edu-magic.net

  • Understanding docs

    From the Oracle 9i docs:
    When backing up datafiles, the target database must be mounted or open. If the database is in ARCHIVELOG mode, then the target can be open or closed: you do not need to close the database cleanly. If the database is in NOARCHIVELOG mode, then you must close it cleanly before making a backup.
    My database is in ARCHIVELOG mode. Following what is written in italics, I shut down my database and tried to back it up with RMAN, but RMAN showed:
    RMAN-06403: could not obtain a fully authorized session
    ORA-01034: ORACLE not available
    Oracle terminology really confuses me :(

    Either you are citing the documentation incorrectly, or what you quoted here is simply wrong.
    Database in ARCHIVELOG mode: you don't need to do anything; you don't need to shut down. You can back up your database using RMAN, provided you also back up the archived logs.
    Database in NOARCHIVELOG mode:
    If your database is open, you need to shut it down using SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE. RMAN can do this for you.
    Subsequently you need to issue STARTUP MOUNT (please note the MOUNT). Your ORA-01034 simply means the instance was not running at all; RMAN needs the database at least mounted before it can back it up.
    Now you can back up your database.
    You don't need to back up archived logs, provided you retain the backup.
    After backing up, you issue
    ALTER DATABASE OPEN.
    This procedure is known as a cold or offline backup.
    It can be performed on NOARCHIVELOG and ARCHIVELOG databases.
    Hot backup works on ARCHIVELOG databases only.
    Hth
    Hth
    Sybrand Bakker
    Senior Oracle DBA
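The cold-backup sequence above can be condensed into a short RMAN script. A minimal sketch follows; the script only prints the RMAN commands (on a real system you would pipe them to `rman target /`):

```shell
#!/bin/sh
# The cold (offline) backup steps above, expressed as an RMAN script.
# Real usage would be:  printf '%s\n' "$RMAN_SCRIPT" | rman target /
RMAN_SCRIPT='shutdown immediate;   # close the database cleanly (not abort)
startup mount;                     # MOUNT, not OPEN, so the backup is consistent
backup database;                   # no archivelog backup needed for a cold backup
alter database open;               # resume normal operation'
printf '%s\n' "$RMAN_SCRIPT"
```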

  • Air - Sqlite with Adobe Air insert data in memory, but do not record on the database file

    I have the code:
    var statement:SQLStatement = new SQLStatement();
    statement.addEventListener(SQLEvent.RESULT, insertResult);
    statement.addEventListener(SQLErrorEvent.ERROR, insertError);
    statement.sqlConnection = sqlConnection;
    // Bound parameters avoid SQL injection from the text inputs
    statement.text = "insert into table values (:nome, :serial)";
    statement.parameters[":nome"] = TINome.text;
    statement.parameters[":serial"] = TISerial.text;
    statement.execute();
    This runs without error and the data is inserted (I don't know where), but when I look into the database with the Firefox SQLite Manager, the database is empty! Yet AIR keeps running for a while as if the data had been recorded in the database file. I don't know what is happening.
    Help please!

    TopLink In Memory was developed by our project to solve this problem. It allows us to run our tests either in memory or against the database.
    In memory, we basically stub out the database. This speeds up our tests about 75x (in memory we run 7,600 tests in 200 seconds; it takes about 5 hours against the database). However, it throws away things like transactions, so you can't test things like rollback.
    In database mode it just uses TopLink. Another benefit is that it watches all the objects created, allowing automatic cleanup of created test objects, keeping your database clean and preventing test interactions.
    We used HSQL running in memory previously, and it worked fine. However, we needed to write scripts to translate our Oracle SQL into HSQL. We also had to override things like the date function in HSQL, and a few of our queries behaved unexpectedly in HSQL. We later abandoned it, as it became a lot of maintenance. It was about 10x faster than running against Oracle.
    Interestingly, we install oracle on all our developers machines too, tests run way faster locally, and developers feel way more comfortable experimenting with a local instance than with a shared instance.
    You can find the toplink in memory stuff at:
    http://toplink-in-mem.sourceforge.net/
    I provide all support for it. Doc is sketchy but I'm happy to guide you through stuff and help out where I can with it.
    - ted

  • Remove OID from database

    Hello,
    I am trying to install a new infrastructure instance without deleting the database. I need to delete or unregister the current Internet Directory from the metadata repository so that a new one can be installed. Does anyone know the steps to remove OID from a database cleanly so that it can be installed again into the same DB? Any info would be greatly appreciated.
    Thanks
    Jordan

    Benny, use netmgr to do this. See "Adding or Modifying Entries in the Directory Server": http://download-west.oracle.com/docs/cd/B19306_01/network.102/b14212/config_concepts.htm#i487299
    Regards,
    --Olaf

  • Insert XML file into Relational database model without using XMLTYPE tables

    Dear all,
    How can I store a known, complex XML file into an existing relational database WITHOUT using XMLTypes in the database?
    I read the article on DBMS_XMLSTORE. DBMS_XMLSTORE indeed partially bridges the gap between XML and an RDBMS, namely for simply structured XML (canonical structure) and simple tables.
    However, when the XML structure becomes arbitrary and rapidly evolving, surely there must be a way to map XML to a relational model more flexibly.
    We work in a Java/Oracle 10 environment that receives very large XML documents from an independent data management source. These files comply with an XML schema. That is all we know. Still, all these data must be inserted/updated daily in an existing relational model. Quite an assignment, isn't it?
    The database does not and will not contain XMLTypes, only plain RDBMS tables.
    Are you aware of a framework, product, or tool that does what DBMS_XMLSTORE does but with any format of XML file? If not, I am doomed.
    Constraints: input via XML files defined by a third party.
    Storage: relational database model with hundreds of tables and thousands of existing queries that cannot be touched. The model must not be altered.
    Target: get this XML into the database on a daily basis via an automated process.
    Cheers.
    Luc.

    Luc,
    you're doomed!
    If you tried something like DBMS_XMLSTORE, you would probably run into serious performance problems in your case, very fast, and it would be very difficult to manage.
    If you used a little bit of XMLType functionality, you would be able to shred the data into the relational model very fast and controllably. Take it from me: I am one of those old geezers, like Mr. Tom Kyte, way beyond 40 years (still joking). No, seriously: I started out as a classical PL/SQL and Forms guy who switched after two years to become a "DBA 1.0", and Mr. Codd and Mr. Date were for years my biggest heroes. I have the utmost respect for Mr. Tom Kyte for all his efforts in bringing the Concepts manual into the development world. Just to name some of the names that influenced me. But you will have to work with UNSTRUCTURED data (as Mr. Date would call it); 80% of the data out there consists of it. Features like XMLTABLE and XML views bridge the gap between that unstructured world and the relational world. It is very doable to drag and drop an XML file into the XML DB database, into an XMLType table, for instance via FTP. From that point on it is in the database, and from there you can move it into relational tables via XMLTABLE methods or XML views.
    You could see the described method as a filtering step through which XML is transformed into relational data. If you don't want any XML in your current database, then create a small Oracle database with XML DB installed (if doable, 11.1.0.7, for the best performance: all the new fast optimizer stuff, etc.). Use that database as a staging area that does all the XML shredding into relational components for you, and ship the end result via database links, materialized views, or other familiar methods into the relational database that isn't allowed to have XMLType.
    This way you keep your relational Oracle database clean and have the Oracle XML DB staging database do all the filtering and shredding into relational components.
    Throwing the XML DB option out of the window beforehand would be like replacing your Mercedes with a bicycle: with both you will be able to travel from Paris to Rome, but it will take you a hell of a lot longer.
    :-)
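As a rough illustration of the XMLTABLE shredding described above, here is a hypothetical query; every table, column, and XPath name is invented for the example (the real mapping depends on the actual schema). The script simply prints the SQL:

```shell
#!/bin/sh
# Hypothetical XMLTABLE shred from a staging XMLType table (staged_docs, with an
# XMLType column "doc") into a relational table; all identifiers are invented.
SQL=$(cat <<'EOF'
INSERT INTO orders (order_id, customer)
SELECT x.order_id, x.customer
FROM   staged_docs d,
       XMLTABLE('/orders/order' PASSING d.doc
                COLUMNS order_id NUMBER        PATH '@id',
                        customer VARCHAR2(100) PATH 'customer') x
EOF
)
printf '%s\n' "$SQL"
```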

  • Word doc track changes in RH

    I need help working out why all my tracked changes in a Word
    doc that I have uploaded to RH show up!
    It's a real mess. The changes have been accepted, so the Word
    doc is clean, but then they all show up in RH.
    Any ideas?

    I don't want to revise the document after tracking changes. My point is: when I finished my tracked changes and sent the document off for the other person to see the changes I had made, they couldn't see any markings, just the revised text (that had been changed via Track Changes). It changes automatically, without anyone accepting any changes I have made.

  • How to re-initialize a Unity Connection 8.5 DataBase for Lab testing

    I have built an Active/Active Unity Connection 8.5 cluster on UCS for lab testing and I am wondering how to re-initialize the database (clean out the database as with a new install) without having to reinstall the application. I forgot to take a DRS backup of the newly installed server to revert back to.
    I have tried using Bulk Administration, but subscribers and call handlers with dependencies will not be touched by the Bulk Administration job. I need a quick way of getting the database back to fresh-install status.

    Hi,
    Unlike Unity, there isn't a way to do this with UC without re-installing the entire application.
    Brad

  • Renaming the Oracle10gXE database name

    Hi,
    Oracle 10g XE
    Linux OS
    How do I rename the database name from 'XE' (the default) to, e.g., 'TEST'?
    What are the exact steps for it?
    Regards

    Hi,
    it is possible to change the SID of XE like that of any other Oracle edition. The easiest way is to use the nid (newid) utility in $ORACLE_HOME/bin: shut down the database cleanly, open it in MOUNT state, and then execute nid target=sys/password@xe dbname=newSID; then change the db_name in the init/spfile, rename the init/spfile to contain the new SID, generate a new password file using orapwd, and open the database with RESETLOGS.
    However, I would not recommend renaming the XE database. I did that, and it has been a pain: our developers changed the hardware several times and wished for new testing and development installations, and I had to play again and again with changing the default Oracle scripts. With XE, you can run only one XE instance on the server (renaming it does not change that). With that in mind, there are multiple Oracle scripts with ORACLE_SID=XE hardcoded in them: namely /etc/init.d/oracle-xe, then all the scripts in $ORACLE_HOME/config/scripts related to starting/stopping and backing up the DB, not forgetting oracle_env.sh...
    Having that experience, I would simply recommend leaving ORACLE_SID=XE and changing the service_name or the TNSNAMES.ora alias. That should be sufficient for the developers to "see a pretty name".
    Kind regards,
    Martin
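Martin's rename steps can be outlined as follows. This is a sketch only: the new SID, password, and file names are placeholders, and the script just prints the outline rather than touching any database:

```shell
#!/bin/sh
# Outline of the XE SID-rename steps above; names and passwords are placeholders.
STEPS=$(cat <<'EOF'
sqlplus / as sysdba        -- SHUTDOWN IMMEDIATE, then STARTUP MOUNT
nid target=sys/password@XE dbname=TEST
-- set db_name=TEST in the init/spfile and rename the file to contain the new SID
orapwd file=$ORACLE_HOME/dbs/orapwTEST password=secret
sqlplus / as sysdba        -- ALTER DATABASE OPEN RESETLOGS
EOF
)
printf '%s\n' "$STEPS"
```

As the answer recommends, simply adding a TNS alias (e.g. a TEST entry in tnsnames.ora pointing at the XE service) achieves the "pretty name" with none of the above.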

  • How to remove a RAC database

    Hi Guys,
    We have some issues with ASM diskgroups, and because of this the diskgroups holding a 2-node RAC DB are not mounted. We want to remove this database cleanly so that we can drop the troubled diskgroups and create new ones. Unfortunately we don't have a backup of this database, as it is not production. So how can we clean it from ASM/CRS? dbca needs the database to be up before it can be removed, but because of the ASM issue and the non-availability of backups, it cannot be brought up!
    Any suggestions would be appreciated.
    Thanks.

    Hi,
    as the database is mounted by RAC:
    1. Shut down node2 (database and ASM instance).
    2. Log in to node1; from RMAN, DROP DATABASE, which will drop all datafiles, controlfiles, etc.
    3. srvctl remove database -d <db_name>, to remove it from the CRS repository.
    4. Delete all parameter and password files from $ORACLE_HOME/dbs.
    5. Remove the entry from /etc/oratab.
    6. Drop the ASM diskgroup.
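A dry-run sketch of the steps above; the database name and paths are placeholders, and every command is only printed (drop the `run` wrapper to do it for real):

```shell
#!/bin/sh
# Dry-run of the RAC cleanup steps above; DB name and paths are placeholders.
DB=racdb
run() { printf '+ %s\n' "$*"; }   # print only; remove to execute

run srvctl stop database -d "$DB"           # 1. stop the instances
run rman target /                           # 2. then inside RMAN:
#    startup mount restrict; drop database including backups;
run srvctl remove database -d "$DB"         # 3. deregister from the CRS repository
run rm "$ORACLE_HOME/dbs/spfile${DB}.ora" "$ORACLE_HOME/dbs/orapw${DB}"  # 4.
# 5. remove the $DB line from /etc/oratab (by hand or with sed)
# 6. from the ASM instance:  drop diskgroup DATA including contents;
```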
    For more detail, see the following Metalink notes:
    Note 239998.1: 10g RAC: How to Clean Up After a Failed CRS Install
    Note 311350.1: How to cleanup ASM installation (RAC and Non-RAC)
    Note 251412.1: How to Drop the Target Database Using RMAN
    Regards
    [email protected]

  • Database size Vs offiles folder size

    I ran GWCheck to check the user databases, and if I add up all the user
    database sizes, it gives me 130 GB.
    But the offiles folder alone holds more than 200 GB.
    Does it mean that there are files with no owner?
    How can I maintain that? We need more free space on that volume; it
    actually has less than 15% free space.
    Thank you!
    FM

    Thank you for answering. Sorry for my late reply.
    We do:
    Daily at 7 pm: Analyze/Fix Database | Structure/Index | Fix Problems | user,
    msg and doc databases
    Saturday at 7 am: Analyze/Fix Database | Contents/Collect Statistics | Fix
    Problems/Update user disk space | user, msg databases
    Is any other scheduled task suggested?
    Thank you again.
    FM
    "magic31" <[email protected]> escreveu na mensagem
    news:[email protected]..
    >
    > Fabio Martins;1985975 Wrote:
    >> I made a gwcheck to check the user databases and if I add all user
    >> databases
    >> size, it gives me 130GB.
    >> But, only in offiles folder, there are more than 200GB.
    >> Does it mean that there are files with no owner?
    >> How can I maintain that? We need more free space on that volume,
    >> actually it
    >> has less than 15% of free space.
    >>
    >> Thankyou!
    >>
    >> FM
    > Do you run regular/scheduled maintenance on your POs?
    >
    > As a first step, from ConsoleOne's PO maintenance, run an Analyze/Fix with
    > the structure/index and fix options selected (no content checks!).
    >
    > When that's done, rerun the Analyze/Fix with only content and fix
    > selected (no structure/index checks).
    >
    > Then check the sizes again....
    > --
    > Novell Knowledge Partner (voluntary sysop)

  • Database without logs - is it possible?

    Hi all!
    I have some data in a Berkeley DB database under Linux. My application collects statistical information into the database. The .db file isn't big (near 100 MB), but the logs grew to 15 GB during a week of work. Now, if I terminate my application by simply killing it, the Berkeley DB engine takes nearly 10-20 minutes to load the database, which is a very long time for such a small database.
    When I disabled logs, Berkeley DB simply refused to open the incorrectly closed database.
    Can I make Berkeley DB simply discard damaged records from the database when I open it, without logs?
    Thanks, and sorry for my English :)

    Hi,
    You cannot remove the log files just like that. A log file may be removed at any time, as long as:
    * the log file is not involved in an active transaction;
    * a checkpoint has been written subsequent to the log file's creation;
    * the log file is not the only log file in the environment.
    You can run the standalone db_archive utility with the -d option to remove any log files that are no longer needed at the time the command is executed.
    Related docs:
    Database and log file archival: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/transapp/archival.html
    Log file removal: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/transapp/logfile.html
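A sketch of the housekeeping Bogdan describes; the environment path is a placeholder and the commands are printed, not executed:

```shell
#!/bin/sh
# Berkeley DB log housekeeping sketch; ENVDIR is a placeholder path.
ENVDIR=/var/lib/myapp/env
run() { printf '+ %s\n' "$*"; }   # print only; remove to execute

run db_archive -h "$ENVDIR"        # list log files no longer needed
run db_archive -d -h "$ENVDIR"     # -d actually deletes them
run db_checkpoint -1 -h "$ENVDIR"  # a checkpoint lets more logs become removable
```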
    Bogdan Coman

  • Can you have two connections to database

    I am running two Java programs which access the same database. One runs forever and keeps the same connection up. The other program, let's say, does a simple update, but it stalls and will never do the update. If I kill the first program's connection, then the second program can do the update. I am using the JDBC thin client. Is there a way I can have these both run and not stall?

    The short answer is, yes.
    There are two general approaches to the problem. The first approach is to use various locking, commit and rollback techniques. The basic premise is that if program A runs into a problem, the database drops A and reverts back to the previous state before responding to program B. This is highly dependent upon the database you're actually using, and probably beyond the scope of this forum.
    The second approach is to code your programs with a philosophy of "get in, do it, get out" as quickly as possible. In other words, do NOT leave a database connection open if you can avoid it.
    Essentially your program makes a connection, does a select query, copies the results into a data structure you define, and exits. Then you're free to play with the results as you see fit. When the time comes to do an update, you make the connection, perform the update, then immediately exit. If you need to test it, make another connection, do a select on that record, copy it, close the connection, then compare the copy with what you anticipate.
    There is the added benefit that whenever you close the connection, most databases will then do whatever commits need to be done, and release any deadlocks that may be in place.
    Your problem isn't really that your database cannot do two connections; it's that your database does not want to let two separate people make changes to the same table at the same time. There's an obvious reason for that, and we can go into more detail if you want.
    But essentially, if you do each transaction as quickly as possible, and let the database clean up after each transaction, you will avoid most, if not all of the problems you're experiencing.
    Don't worry about the overhead of creating and closing connections. If it does become a problem, you can always use a "Connection Pool" (search for it on the forums if you want to know more). You just don't want to be in a position where the database is saying "I can't give you the info you want because I don't know if this other program is going to change any of it before giving control back to me."

  • Rebuild Database

    Sorry to all if this has already been covered, but I perused the Adobe LR forum and the FAQ/Tips over at LightroomExtra.com and cannot find an answer to my question...
    Like many of you, I just upgraded to the official release of LR 1.0 and allowed the software to rebuild my library into the correct format. However, I have decided on a more logical tree structure for all my image files and now would like to totally wipe out everything LR knows and re-import all my files. I use "by reference" for all importing, btw. On Mac, can I simply delete ~/Pictures/Lightroom? Or, how do I wipe the database clean?
    Thank you for any help provided.

    Hi Eric,
    Not to appear too dense... the library I would delete is ~/Pictures/Lightroom/Lightroom Database.lrdb? And I suppose I could also remove Lightroom Metadata.lrdata and Lightroom Previews.lrdata?
    Since I didn't use Lightroom throughout the beta period for any "real" work, I don't have any keywords, IPTC, etc., data to worry about saving.
    Thanks, again, for your help.
    AlanH
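For reference, the wipe being discussed can be sketched as a dry run, using the default Mac location and the file names mentioned above (quit Lightroom first, verify the paths on your own system, and remove the `run` wrapper only when you are sure):

```shell
#!/bin/sh
# Dry-run of wiping the Lightroom 1.0 library files discussed above.
LRDIR="$HOME/Pictures/Lightroom"
run() { printf '+ %s\n' "$*"; }   # print only; remove to delete for real

run rm "$LRDIR/Lightroom Database.lrdb"        # the catalog itself
run rm -r "$LRDIR/Lightroom Previews.lrdata"   # cached previews (rebuilt later)
run rm -r "$LRDIR/Lightroom Metadata.lrdata"   # metadata cache
# "By reference" image files are untouched; re-import them afterwards.
```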
