Store full

I am a novice WebLogic user. I wrote some code to add messages to a queue; unfortunately a loop was involved, and this appears to have filled the store to the extent that the server will no longer start cleanly - see the relevant part of the DOS window below.
Can I clear out this data, or delete and recreate the store (with the existing example data), or do I have to reinstall the application?
Many thanks, Terry Bennett
<02-Dec-2008 17:18:59 o'clock GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY>
<02-Dec-2008 17:18:59 o'clock GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
<02-Dec-2008 17:19:11 o'clock GMT> <Error> <Store> <BEA-280072> <JDBC store "exampleJDBCStore" failed to open table "examplesWLStore".
weblogic.store.io.jdbc.JDBCStoreException: [Store:280065]java.sql.SQLException: Database size exceeded the licensed limit of 30 MB. (server="examplesServer" store="exampleJDBCStore" table="examplesWLStore"):(Linked Cause, "java.sql.SQLException: Database size exceeded the licensed limit of 30 MB.")
at weblogic.store.io.jdbc.JDBCStoreIO.getTableOwnershipPhysical(JDBCStoreIO.java:2089)
at weblogic.store.io.jdbc.JDBCStoreIO.getTableOwnershipLogical(JDBCStoreIO.java:2127)
at weblogic.store.io.jdbc.JDBCStoreIO.open(JDBCStoreIO.java:379)
at weblogic.store.internal.PersistentStoreImpl.recoverStoreConnections(PersistentStoreImpl.java:332)
at weblogic.store.internal.PersistentStoreImpl.open(PersistentStoreImpl.java:323)
Truncated. see log file for complete stacktrace
java.sql.SQLException: Database size exceeded the licensed limit of 30 MB.
at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
at com.pointbase.net.netJDBCPrimitives.handlePrimitiveResponse(Unknown Source)
at com.pointbase.net.netJDBCPreparedStatement.executeUpdate(Unknown Source)
at weblogic.jdbc.wrapper.PreparedStatement.executeUpdate(PreparedStatement.java:159)
at weblogic.store.io.jdbc.ReservedConnection.fillInsertStatement(ReservedConnection.java:727)
Truncated. see log file for complete stacktrace
>
<02-Dec-2008 17:19:11 o'clock GMT> <Error> <Store> <BEA-280061> <The persistent store "exampleJDBCStore" could not be deployed: weblogic.store.io.jdbc.JDBCStoreException: open failed
weblogic.store.io.jdbc.JDBCStoreException: open failed
at weblogic.store.io.jdbc.JDBCStoreIO.open(JDBCStoreIO.java:442)
at weblogic.store.internal.PersistentStoreImpl.recoverStoreConnections(PersistentStoreImpl.java:332)
at weblogic.store.internal.PersistentStoreImpl.open(PersistentStoreImpl.java:323)
at weblogic.store.admin.AdminHandler.activate(AdminHandler.java:135)
at weblogic.store.admin.JDBCAdminHandler.activate(JDBCAdminHandler.java:66)
Truncated. see log file for complete stacktrace
weblogic.store.io.jdbc.JDBCStoreException: [Store:280065]java.sql.SQLException: Database size exceeded the licensed limit of 30 MB. (server="examplesServer" store="exampleJDBCStore" table="examplesWLStore"):(Linked Cause, "java.sql.SQLException: Database size exceeded the licensed limit of 30 MB.")
at weblogic.store.io.jdbc.JDBCStoreIO.getTableOwnershipPhysical(JDBCStoreIO.java:2089)
at weblogic.store.io.jdbc.JDBCStoreIO.getTableOwnershipLogical(JDBCStoreIO.java:2127)
at weblogic.store.io.jdbc.JDBCStoreIO.open(JDBCStoreIO.java:379)
at weblogic.store.internal.PersistentStoreImpl.recoverStoreConnections(PersistentStoreImpl.java:332)
at weblogic.store.internal.PersistentStoreImpl.open(PersistentStoreImpl.java:323)
Truncated. see log file for complete stacktrace
java.sql.SQLException: Database size exceeded the licensed limit of 30 MB.
at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
at com.pointbase.net.netJDBCPrimitives.handlePrimitiveResponse(Unknown Source)
at com.pointbase.net.netJDBCPreparedStatement.executeUpdate(Unknown Source)
at weblogic.jdbc.wrapper.PreparedStatement.executeUpdate(PreparedStatement.java:159)
at weblogic.store.io.jdbc.ReservedConnection.fillInsertStatement(ReservedConnection.java:727)
Truncated. see log file for complete stacktrace
>
<02-Dec-2008 17:19:11 o'clock GMT> <Warning> <Management> <BEA-141197> <The deployment of exampleJDBCStore failed.
weblogic.management.DeploymentException:
at weblogic.store.admin.AdminHandler.activate(AdminHandler.java:138)
at weblogic.store.admin.JDBCAdminHandler.activate(JDBCAdminHandler.java:66)
at weblogic.management.utils.GenericManagedService.activateDeployment(GenericManagedService.java:239)
at weblogic.management.utils.GenericServiceManager.activateDeployment(GenericServiceManager.java:131)
at weblogic.management.internal.DeploymentHandlerHome.invokeHandlers(DeploymentHandlerHome.java:591)
Truncated. see log file for complete stacktrace
java.sql.SQLException: Database size exceeded the licensed limit of 30 MB.
at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
at com.pointbase.net.netJDBCPrimitives.handlePrimitiveResponse(Unknown Source)
at com.pointbase.net.netJDBCPreparedStatement.executeUpdate(Unknown Source)
at weblogic.jdbc.wrapper.PreparedStatement.executeUpdate(PreparedStatement.java:159)
at weblogic.store.io.jdbc.ReservedConnection.fillInsertStatement(ReservedConnection.java:727)
Truncated. see log file for complete stacktrace
>
<02-Dec-2008 17:19:12 o'clock GMT> <Error> <JMS> <BEA-040123> <Failed to start JMS Server "examplesJMSServer" due to weblogic.management.DeploymentException: The persistent store "exampleJDBCStore" does not exist.
weblogic.management.DeploymentException: The persistent store "exampleJDBCStore" does not exist
at weblogic.jms.deployer.BEAdminHandler.findPersistentStore(BEAdminHandler.java:304)
at weblogic.jms.deployer.BEAdminHandler.activate(BEAdminHandler.java:192)
at weblogic.management.utils.GenericManagedService.activateDeployment(GenericManagedService.java:239)

I think it is likely that you are using the PointBase database as a JDBC store for your JMS messages. PointBase ships with WLS under an evaluation license that allows the database to grow up to 30 MB. Once it exceeds that limit, you need to get a license from DataMirror/IBM to keep using that database:
http://www-01.ibm.com/software/data/integration/dm/
Are you using the OOTB WebLogic examples domain? If the messages in the store are not important to you, and you would rather just get the server up and running again, it's probably easier to use a file store instead of a JDBC store for your JMS messages. If you want to keep a JDBC store, then you may want to consider a production-quality database instead of the evaluation edition of PointBase.
You could either:
- create a new domain with the Configuration Wizard (easier), or
- edit your current domain to not use PointBase (more advanced).
So if you can tell us:
- what your goal is,
- whether or not the messages need to be saved,
- how you came to use this particular domain (created it yourself, or from the samples),
then we can give you some better advice.
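If the example data itself is disposable, you can usually recover without reinstalling by dropping the store table while the server is down; WebLogic recreates an empty store table on the next boot. A minimal sketch, assuming the examples PointBase instance is on localhost:9092 with database "demo" and schema/password "examples"/"examples" (adjust to your setup; the table name comes from the BEA-280072 message above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DropWlStoreTable {
    public static void main(String[] args) throws Exception {
        // PointBase network driver shipped with the WLS examples
        Class.forName("com.pointbase.jdbc.jdbcUniversalDriver");
        // Assumption: host, port, database name, and credentials below
        // are the examples-domain defaults - adjust to your installation.
        Connection con = DriverManager.getConnection(
                "jdbc:pointbase:server://localhost:9092/demo",
                "examples", "examples");
        Statement st = con.createStatement();
        // WebLogic recreates an empty examplesWLStore table at startup,
        // so dropping it discards the queued messages but nothing else.
        st.executeUpdate("DROP TABLE examplesWLStore");
        st.close();
        con.close();
    }
}

Start the PointBase server first (with WebLogic itself shut down), run the class with the PointBase client JAR on the classpath, then restart the server. To avoid refilling the store from the same loop, you could also send your test messages with DeliveryMode.NON_PERSISTENT so they never hit the persistent store.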

Similar Messages

  • 6230i 'record store full' when it isn't

    When I download emails I get a 'record store full' message. I have deleted all messages and all pictures. When I go to 'e-mail storage' it shows me that my inbox has zero messages, but it says that my inbox is 73% full! All other folders are empty. I have removed the card, SIM, battery etc. and restarted - no difference. Is this a hardware problem?
    PS: This used to work fine till yesterday - I always use my phone to download emails.

    I have now solved the problem which suddenly appeared a few weeks ago. I backed up the entire phone using PC Suite, having first saved all my contacts to the SIM card. (I don't trust the contacts management in PC Suite.)
    I then had the phone software upgraded by my local dealer. I now have: Nokia 6230i V 03.88 (vs. V 03.70, 28 Jul 06 before), RM-72, GSM P1.1
    I then restored from the backup and got most of it back. There were some minor problems with display etc. settings. I also lost the email settings, but I could download them from my mail provider and it now works. The key seems to be that my inbox setting, which before showed 0 messages but 31% utilisation, is now back to 0%.
    Cheers JATO

  • App store / full content?

    Following the categories in the iTunes Store / App Store (for iPhone / kids / kids between 6-8) I can only find very few apps/games ("the best new apps and games") - the result is about 28 in all. How can I get the full content of all available apps?
    thanks

    Yes, you do.

  • App Store - Full screen preview.

    How do I view an app screenshot in full screen? I've just updated to iOS 6. I noticed you could tap a screenshot for a larger view in the previews, but when I tap, nothing happens.

    Has this feature been removed?

  • Full production refresh to development systems and Charm?

    Hi SDN Community,
      Where I work we have a policy of refreshing the development systems (ECC, CRM, APO, BI, SRM) on an annual basis with full production system data; HR data is subsequently wiped clean from the development system.
    For each of our systems we have a development system, a QA system and a production system.
    The decision to do these full refreshes was taken to ensure that developers and configurers can do decent testing in the Dev environments - prior to transporting to our QA system - to ensure that we have a reasonably stable QA system. Prior to the refreshes - QA was considered too unstable by our business to be used as a good test platform - the full D refreshes solved this problem.
    We have recently installed SAP Charm (Change request management) which is a solution manager based product that manages the transportation of objects in change requests - attached to maintenance cycles.
    During our first development system refresh we discovered that all open Charm projects (maintenance cycles) were placed into an unusable status by the refresh. This meant we had to rebuild all of the Charm projects after the refresh - which took a considerable amount of time and resources and delayed project work. We don't want to be in this situation again for our next refresh.
    SAP's response to us has been that they have no other customer that does full development system refreshes and also has Charm. I wonder if this is accurate; I think what SAP is saying is that they don't have many Charm customers... Is refreshing development so uncommon?
    We are considering products like SAP's TDMS - and alternatives (e.g. Gold Client) - but most of these appear a little immature for refreshing complex system landscapes such as ours and keeping everything in sync after the refresh.
    My questions to SDN -
    What do you do - do you do full refreshes to your D systems?
    If not - what are the arguments against refreshing D systems (is this considered bad practice)?
    Are there any other Charm customers out there in a similar situation to us?
    How do you manage the stability of your QA environments and stop the transport of untested changes into QA systems - if your D systems are not similar to your production system
    (note we have some 300 active users in just our ECC development system)?
    Any suggestions / recommendations as to how to best proceed would be greatly appreciated.
    many thanks in advance,
    Julian Phillips

    Thanks Naushad for your reply,
      I wonder if D refreshes are less common more due to the additional hardware costs incurred in doing them (the disk space needed to store full production data is usually pretty large) than due to avoiding disruption to development - probably a bit of both.
      I did not mention this, but in our refresh we do reimport all transports that were open prior to the refresh - which used to solve our problems in terms of restoring active changes - but with SAP's Charm system this approach does not work, as the Charm projects are left in an inconsistent status.
    We are now leaning towards introducing a 4th set of systems into our landscape - so that we have the following platforms:
    Development --->  Pre QA  ---> QA  ---> Production
    This approach will allow us to keep both the Pre QA and QA systems refreshed every 6 months or so - and our new Dev system we will refresh very infrequently (every 5 years perhaps), if at all. We will tie the Charm system to our new development system, which will then not be impacted by refreshes. This approach means additional hardware cost - but as it happens we have a few spare boxes - so this may be our best bet.
    The core reason for our refresh is so that we will be able to preserve the stability of our QA test platform - which for us is the critical factor here. If QA has poorly tested work in it - we run a higher risk of disrupting other testing and also of this poorly tested work reaching production.
    When we used to have a development system as you described and just the single QA box, we experienced frequent instability in our QA box - due to poorly tested work reaching it - and this delayed numerous projects.
    Does anyone else have a 4 box setup like this? Anyone else encountered this on an SAP project anywhere? Pros / Cons of this approach?
    //Julian

  • Can I restore cache files to full res?

    I somehow deleted a folder from my system months ago that I now need. I can't find the folder anywhere in my hard drive or backup, but I can find the cache files. Is there anything I can do with those?

    If I understand it correctly, if you have Bridge set to store full-sized previews in cache, then it stores a JPG the full size of the image.  If you zoom in to '100%/actual pixel size' with the Bridge viewer, then you do indeed see a representation of the full resolution of any particular image.  But AFAIK you can only access the cache via Bridge.  You can't go in with My Documents and pull out a single JPG.
    This also presupposes that the Bridge cache keeps a copy of deleted documents.  I can just imagine the outcry on sites like this if it was discovered that Bridge was that sloppy with cache file management! So I suspect that you are out of luck, in that respect at least.
    But you mention a back up, so the files were stored in two locations prior to being deleted?  That doubles the chance of a file recovery program finding them.  I'm guessing that you have tried the Recycle Bin, or whatever Mac systems use?

  • Storer: Error in parsing:Could not process

    ZENworks for Desktops 6.5 SP2
    Netware 6.5 SP2
    Sybase DB 8.0.2 (4339)
    ZENworks Agents (still) 4.01xxx
    ZENworks Inventory Service Console Messages:
    Starting the Upgrade service.
    The database has already been set to the Ready state.
    The Upgrade service is trying to free the resources.
    Stopping the upgrade service.
    Starting Storer Service
    Obtaining dbdir from service object: VOL1:\\Zenworks\\ScanDir\\DbDir
    Trying to connect to the database -> MW_DBA
    jdbc:sybase:Tds:xxx.xxx.xxx.xxx:2638?ServiceName=mgmtdb&JCONNECT_VERSION=4
    Successfully connected to the database
    Storer: Database is initialized.
    Storer: started storing 0008028F9BBE_1060751520000_75.STR (2686 bytes)
    Starting STRConverter service
    Storer: Successfully stored the information for
    CN=WKSTA2.OU=Workstations.OU=BO2.O=CORP time taken: 12486
    Storer: started storing 000BCDC3AF5E_1070456883000_81.STR (95778 bytes)
    Receiver Service Started
    Starting Selector Service
    Inventory Sync Service started
    Storer: Full scan being processed for CN=WKSTA1.OU=Workstations.OU=BO1.O=CORP
    Error in parsing:Could not process 000BCDC3AF5E_1070456883000_81.STR due to
    DB Operation failure..will retry
    TCP Receiver Service Started
    There are several workstations that send .STR files to the Inventory
    Service (Leaf->Root) which cause the Storer on the root DB server to stop.
    It says "..will retry" but it never does. So, I have to manually stop the Inv
    Service & db, remove the "corrupt" .STR file from the
    \ZENworks\ScanDir\DBDir\temp folder, and start the db and InvService again.
    Then the process continues until the next "corrupt" .STR file. They seem to
    be Full Scan files according to the size of them.
    Question: What can be wrong? How can I make Storer just skip those corrupt
    .STR files?
    Below: Part of Storer debug log file from root db server.
    [11/11/05 09:34:28.208] ZENInv - Storer: Total Memory = 8126464 Free Memory
    = 5182912
    [11/11/05 09:34:28.208] ZENInv - Storer: Unlocking
    VOL1:\Zenworks\ScanDir\DbDir\temp\0008028F9BBE_1060751520000_75.STR
    [11/11/05 09:34:28.801] ZENInv - Storer: Loading Storer test properties file
    [11/11/05 09:34:28.810] ZENInv - Storer: Storer: started storing
    000BCDC3AF5E_1070456883000_81.STR (95778 bytes)
    [11/11/05 09:34:28.848] ZENInv - Storer: dn:
    CN=WKSTA1.OU=Workstations.OU=BO1.O=CORP tree: TREE123
    [11/11/05 09:34:28.848] ZENInv - Storer: tree: TREE123wsdn:
    CN=WKSTA1.OU=Workstations.OU=BO1.O=CORPtime: 1131612609000
    [11/11/05 09:34:28.848] ZENInv - Storer: Initial WS statusrecord is found
    [11/11/05 09:34:28.848] ZENInv - Storer: got the status log
    [11/11/05 09:34:29.138] ZENInv - Storer: [FULL]DELETING ALL PRODUCTS
    [11/11/05 09:34:32.091] ZENInv - SendRec Common: entPushDir =
    VOL1:\Zenworks\ScanDir\EntPushDir
    [11/11/05 09:34:32.091] ZENInv - SendRec Common: entMergeDirD =
    VOL1:\Zenworks\ScanDir\EntMergeDir
    [11/11/05 09:34:32.091] ZENInv - SendRec Common: dbDirD =
    VOL1:\Zenworks\ScanDir\DbDir
    [11/11/05 09:34:32.091] ZENInv - SendRec Common: serverName = SERVER01
    [11/11/05 09:34:32.091] ZENInv - SendRec Common: serviceDN =
    CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:32.092] ZENInv - SendRec Common: treeName = TREE123
    [11/11/05 09:34:32.092] ZENInv - SendRec Common: hasSSD = false
    [11/11/05 09:34:32.092] ZENInv - SendRec Common: hasISD = true
    [11/11/05 09:34:32.092] ZENInv - SendRec Common: hasESD = true
    [11/11/05 09:34:32.092] ZENInv - SendRec Common: hasDB = true
    [11/11/05 09:34:32.092] ZENInv - SendRec Common: securityDir =
    SYS:\PUBLIC\ZENWORKS\WMINV\PROPERTIES
    [11/11/05 09:34:32.109] Service Manager: start(ServiceDataAccessor,
    String[]) not found in
    'com.novell.zenworks.desktop.inventory.selector.SelectorServiceInit'
    [11/11/05 09:34:32.162] ZENInv - Selector: Selector Services Started
    Successfully
    [11/11/05 09:34:32.164] Service Manager: start(ServiceDataAccessor,
    String[]) not found in
    'com.novell.zenworks.common.inventory.scancollector.ScanCollector'
    [11/11/05 09:34:32.184] ZENInv - Selector: Selector StrFileDelay Not Set
    [11/11/05 09:34:32.185] ZENInv - Selector: Selector Code Profiling disabled
    [11/11/05 09:34:32.276] ZENInv - IFS Server: zenInvScanCollector:
    FileServiceController: Startup Properties: {chunksize=4096,
    lockfactory=com.novell.zenworks.common.inventory.ifs.utils.MemoryFileLockFactory,
    lockseed=ScanSelectorLock, transfers=100,
    rootdirectory=VOL1:\Zenworks\ScanDir, timeout=60000,
    servicename=zenInvScanCollector, portnumber=0}
    [11/11/05 09:34:32.429] ZENInv - CascadedBaseTime Server:
    zenInvCascadeBaseTimeService: CBTServiceController: Startup Properties:
    {basetime=Sat Jan 01 00:05:09 EET 2005,
    servicename=zenInvCascadeBaseTimeService, portnumber=0}
    [11/11/05 09:34:32.436] Service Manager: start(ServiceDataAccessor,
    String[]) not found in
    'com.novell.zenworks.desktop.inventory.InvSyncService.ManagableSyncService'
    [11/11/05 09:34:32.457] ZENInv - Inventory Sync Service: SyncService thread
    started
    [11/11/05 09:34:32.466] ZENInv - Inventory Sync Service: NEW
    SyncServiceTable Constructor Invoked
    [11/11/05 09:34:32.466] ZENInv - Inventory Sync Service: Creating-Verifying
    Serialize-Deserialize Location VOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:32.467] ZENInv - Inventory Sync Service: Checking for
    VOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:32.469] Service Manager: start(ServiceDataAccessor,
    String[]) not found in
    'com.novell.zenworks3x.desktop.inventory.senderreceiver.control.ReceiverServiceInit'
    [11/11/05 09:34:32.472] ZENInv - Inventory Sync Service: synchTableDir
    exists. Check wether this is a directory or File
    [11/11/05 09:34:32.474] ZENInv - Inventory Sync Service: Directory
    ExistsVOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:32.478] ZENInv - Inventory Sync Service: Directory
    Existence ConfirmedVOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:32.478] ZENInv - Inventory Sync Service:
    Serialize-Deserialize File VOL1:\Zenworks\ScanDir\stable\STABLE.SER
    [11/11/05 09:34:32.478] ZENInv - Inventory Sync Service: Initializing
    SyncServiceTable
    [11/11/05 09:34:32.478] ZENInv - Inventory Sync Service: SynchTable Does
    not Exist
    [11/11/05 09:34:32.478] ZENInv - Inventory Sync Service: Attempting to Load
    SynchTable From Serialized File
    [11/11/05 09:34:32.479] ZENInv - Inventory Sync Service: DeSerializing
    hashTable FromVOL1:\Zenworks\ScanDir\stable\STABLE.SER
    [11/11/05 09:34:32.480] ZENInv - Inventory Sync Service: DeSerializing
    SyncService HashTable
    [11/11/05 09:34:32.483] ZENInv - Inventory Sync Service: SynchTable Loaded
    Sucessfully From Serialized File
    [11/11/05 09:34:32.487] ZENInv - IFS Server: zeninvReceiverService:
    FileServiceController: Startup Properties: {chunksize=4096, transfers=100,
    rootdirectory=VOL1:\Zenworks\ScanDir\EntPushDir\Zi pDir, timeout=60000,
    servicename=zeninvReceiverService, portnumber=0}
    [11/11/05 09:34:38.169] ZENInv - Storer: Products=379 Sw_Times = 2361 379 0
    0 1354 379 0 0 944 379 0 0 TotalTime=8983
    [11/11/05 09:34:40.136] ZENInv - Storer: ws deletetime : 1774
    [11/11/05 09:34:40.435] ZENInv - Storer: Some Database Exception
    com.novell.zenworks.desktop.inventory.storer.DatabaseException: ASA Error -194: No primary key value for foreign key 'id$' in table 't$LockTable'
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.connectEx(DatabaseOperator.java:1164)
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.reTryExecute(DatabaseOperator.java:1227)
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.updateLockTable(DatabaseOperator.java:6130)
    at com.novell.zenworks.desktop.inventory.storer.Parse.writeToDB(Parse.java:2360)
    at com.novell.zenworks.desktop.inventory.storer.Parse.parse(Parse.java:4113)
    at com.novell.zenworks.desktop.inventory.storer.MainThread.run(MainThread.java:976)
    [11/11/05 09:34:40.440] ZENInv - Storer: DatabaseException:DB operation
    failed..could not process 000BCDC3AF5E_1070456883000_81.STR due to
    com.novell.zenworks.desktop.inventory.storer.DatabaseException: ASA Error -194: No primary key value for foreign key 'id$' in table 't$LockTable'
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.connectEx(DatabaseOperator.java:1164)
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.reTryExecute(DatabaseOperator.java:1227)
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.updateLockTable(DatabaseOperator.java:6130)
    at com.novell.zenworks.desktop.inventory.storer.Parse.writeToDB(Parse.java:2360)
    at com.novell.zenworks.desktop.inventory.storer.Parse.parse(Parse.java:4113)
    at com.novell.zenworks.desktop.inventory.storer.MainThread.run(MainThread.java:976)
    [11/11/05 09:34:40.444] ZENInv - Storer: MainThread-1 position:
    com.novell.zenworks.desktop.inventory.storer.DatabaseException: ASA Error -194: No primary key value for foreign key 'id$' in table 't$LockTable'
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.connectEx(DatabaseOperator.java:1164)
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.reTryExecute(DatabaseOperator.java:1227)
    at com.novell.zenworks.desktop.inventory.storer.DatabaseOperator.updateLockTable(DatabaseOperator.java:6130)
    at com.novell.zenworks.desktop.inventory.storer.Parse.writeToDB(Parse.java:2360)
    at com.novell.zenworks.desktop.inventory.storer.Parse.parse(Parse.java:4113)
    at com.novell.zenworks.desktop.inventory.storer.MainThread.run(MainThread.java:976)
    [11/11/05 09:34:40.448] ZENInv - Status Reporting: Messages are written
    into XML file for DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:40.485] ZENInv - Status Reporting: Number of records to add
    are: 1 for DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:40.520] ZENInv - Status Reporting: Adding record 0 for
    DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:40.661] ZENInv - Status Reporting: Number of modified
    records are: 0 for DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:40.661] ZENInv - Storer: MainThread-2 position:
    [11/11/05 09:34:42.136] ZENInv - Selector: Getting ServerConfig HashTable
    [11/11/05 09:34:42.136] ZENInv - Selector: Getting InvServiceObj from HashTable
    [11/11/05 09:34:42.136] ZENInv - Selector: Getting NDSTree from ServiceObject
    [11/11/05 09:34:42.136] ZENInv - Selector: NDSTree=null
    [11/11/05 09:34:42.136] ZENInv - Selector: Getting InventoryServiceDN from
    ServiceObject
    [11/11/05 09:34:42.136] ZENInv - Selector:
    InventoryServiceDN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:42.136] ZENInv - Selector: Getting ScanDir from ServiceObject
    [11/11/05 09:34:42.136] ZENInv - Selector: ScanDir=VOL1:\Zenworks\ScanDir
    [11/11/05 09:34:42.137] ZENInv - Selector: NEW SyncServiceTable Constructor
    Invoked
    [11/11/05 09:34:42.137] ZENInv - Selector: Creating-Verifying
    Serialize-Deserialize Location VOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:42.137] ZENInv - Selector: Checking for
    VOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:42.137] ZENInv - Selector: synchTableDir exists. Check
    wether this is a directory or File
    [11/11/05 09:34:42.138] ZENInv - Selector: Directory
    ExistsVOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:42.138] ZENInv - Selector: Directory Existence
    ConfirmedVOL1:\Zenworks\ScanDir\stable\
    [11/11/05 09:34:42.138] ZENInv - Selector: Serialize-Deserialize File
    VOL1:\Zenworks\ScanDir\stable\STABLE.SER
    [11/11/05 09:34:42.138] ZENInv - Selector: Initializing SyncServiceTable
    [11/11/05 09:34:42.138] ZENInv - Selector: Will Use the existing
    SyncServiceTable
    [11/11/05 09:34:42.138] ZENInv - Selector: Getting hasDatabase status from
    ServiceObject
    [11/11/05 09:34:42.138] ZENInv - Selector: hasDatabase is true from
    ServiceObject
    [11/11/05 09:34:42.138] ZENInv - Selector: Getting isStandAlone status from
    ServiceObject
    [11/11/05 09:34:42.138] ZENInv - Selector: isStandAlone is true from
    ServiceObject
    [11/11/05 09:34:42.139] ZENInv - Selector: ConvDir VOL1:\Zenworks\ScanDir\conv\
    [11/11/05 09:34:42.139] ZENInv - Selector: ConvDir exists. Check wether
    this is a directory or File
    [11/11/05 09:34:42.139] ZENInv - Selector: VOL1:\Zenworks\ScanDir
    [11/11/05 09:34:42.139] ZENInv - Selector: VOL1:\Zenworks\ScanDir\DbDir
    [11/11/05 09:34:42.139] ZENInv - Selector:
    [11/11/05 09:34:42.139] ZENInv - Selector: Getting SELECTOR_STORER Synch Object
    [11/11/05 09:34:42.139] ZENInv - Selector: Getting SELECTOR_COLLECTOR Synch
    Object
    [11/11/05 09:34:42.139] ZENInv - Selector: Getting SELECTOR_CONVERTER Synch
    Object
    [11/11/05 09:34:42.140] ZENInv - Selector: Getting CONVERTER_SELECTOR Synch
    Object
    [11/11/05 09:34:42.140] ZENInv - Selector: Getting SYNCHSERVICE_SELECTOR
    Synch Object
    [11/11/05 09:34:42.442] ZENInv - TCPReceiver: cascadingBaseTime = 1104530709000
    [11/11/05 09:34:42.442] ZENInv - TCPReceiver: entPushDir =
    VOL1:\Zenworks\ScanDir\EntPushDir
    [11/11/05 09:34:42.442] ZENInv - TCPReceiver: serverName = SERVER01
    [11/11/05 09:34:42.442] ZENInv - TCPReceiver: serviceDN =
    CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:42.442] ZENInv - TCPReceiver: treeName = TREE123
    [11/11/05 09:34:42.443] ZENInv - TCPReceiver: hasDB = true
    [11/11/05 09:34:42.483] ZENInv - TCPReceiver: Receiver Started without CLUSTER
    [11/11/05 09:34:42.484] ZENInv - TCPReceiver: Receiver Binds to Port Number
    : 65432
    [11/11/05 09:34:42.486] Service Manager: start(ServiceDataAccessor,
    String[]) not found in
    'com.novell.zenworks.common.inventory.dictionaryupdate.provider.DictProvider'
    [11/11/05 09:34:42.514] ZENInv - IFS Server: zenInvDictProvider:
    FileServiceController: Startup Properties: {chunksize=4096, transfers=100,
    rootdirectory=VOL1:\ZENWORKS\Inv\server\DictDir, timeout=60000,
    servicename=zenInvDictProvider, portnumber=0}
    [11/11/05 09:34:42.542] Service Manager: start(ServiceDataAccessor,
    String[]) not found in
    'com.novell.zenworks.common.inventory.dictionaryupdate.consumer.DictConsumer'
    [11/11/05 09:34:42.859] ZENInv - Dictionary Consumer:
    DictConsumerUtility::getUpdatePolicyDN: getDictionaryUpdatePolicy returned
    attribs.returnValue = 0
    [11/11/05 09:34:42.859] ZENInv - Dictionary Consumer:
    DictConsumerService::DictDownloadThread::run: UpdatePolicyNotFoundException.
    com.novell.zenworks.common.inventory.dictionaryupdate.consumer.UpdatePolicyNotFoundException
    at com.novell.zenworks.common.inventory.dictionaryupdate.consumer.DictConsumerUtility.getUpdatePolicyDN(DictConsumerUtility.java:237)
    at com.novell.zenworks.common.inventory.dictionaryupdate.consumer.DictConsumerService$DictDownloadThread.setUpdatePolicyAttribs(DictConsumerService.java:688)
    at com.novell.zenworks.common.inventory.dictionaryupdate.consumer.DictConsumerService$DictDownloadThread.getFileClientProperties(DictConsumerService.java:616)
    at com.novell.zenworks.common.inventory.dictionaryupdate.consumer.DictConsumerService$DictDownloadThread.transferFiles(DictConsumerService.java:429)
    at com.novell.zenworks.common.inventory.dictionaryupdate.consumer.DictConsumerService$DictDownloadThread.run(DictConsumerService.java:211)
    [11/11/05 09:34:42.862] ZENInv - Status Reporting: Messages are written
    into XML file for DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:42.955] ZENInv - Status Reporting: Number of records to add
    are: 1 for DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:42.989] ZENInv - Status Reporting: Adding record 0 for
    DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:43.132] ZENInv - Status Reporting: Number of modified
    records are: 0 for DN=CN=SERVER01_ZenInvService.O=CORP
    [11/11/05 09:34:43.134] ZENInv - Dictionary Consumer:
    DictConsumerService::FileDownloadListener::downloadFailed.
    [11/11/05 09:39:25.639] Service Manager: Stopping Service Server
    Configuration Service
    [11/11/05 09:39:25.640] Service Manager: Service Server Configuration
    Service stopped successfully
    [11/11/05 09:39:25.645] Service Manager: Stopping Service Dictionary
    Consumer Service
    [11/11/05 09:39:25.645] Service Manager: Service Dictionary Consumer
    Service stopped successfully
    [11/11/05 09:39:25.652] Service Manager: Stopping Service TCPReceiver Service
    [11/11/05 09:39:25.656] Service Manager: Service TCPReceiver Service
    stopped successfully
    [11/11/05 09:39:25.659] Service Manager: Stopping Service STRConverter Service
    [11/11/05 09:39:25.969] ZENInv - STRConverter: STRConverter service is stopped
    [11/11/05 09:39:25.969] Service Manager: Service STRConverter Service
    stopped successfully
    [11/11/05 09:39:25.975] Service Manager: Stopping Service Selector Service
    [11/11/05 09:39:28.894] ZENInv - Selector: Selector Will Now Serialize
    SynchTable[Stop Slector Invoked]
    [11/11/05 09:39:28.894] ZENInv - Selector: Serializing hashTable
    ToVOL1:\Zenworks\ScanDir\stable\STABLE.SER
    [11/11/05 09:39:28.896] ZENInv - Selector: Selector Services are stopped -
    Exiting
    [11/11/05 09:39:28.896] ZENInv - Selector: STOP_PENDING message sent to
    StatusChangeListener
    [11/11/05 09:39:28.896] ZENInv - Selector: STOPPED message sent to
    StatusChangeListener
    [11/11/05 09:39:28.896] ZENInv - Selector: Selector Services Stopped
    [11/11/05 09:39:28.896] Service Manager: Service Selector Service stopped
    successfully
    [11/11/05 09:39:28.900] Service Manager: Stopping Service Scan Collector
    Service
    [11/11/05 09:39:28.923] Service Manager: Service Scan Collector Service
    stopped successfully
    [11/11/05 09:39:28.928] Service Manager: Stopping Service Receiver Service
    [11/11/05 09:39:29.001] Service Manager: Service Receiver Service stopped
    successfully
    [11/11/05 09:39:29.002] Service Manager: Stopping Service InventorySync
    Scheduler Service
    [11/11/05 09:39:29.002] Service Manager: Service InventorySync Scheduler
    Service stopped successfully
    [11/11/05 09:39:29.009] Service Manager: Stopping Service Storer Service
    [11/11/05 09:39:29.009] Service Manager: Service Storer Service stopped
    successfully
    [11/11/05 09:39:29.016] Service Manager: Stopping Service InvDBSync Service
    [11/11/05 09:39:29.016] ZENInv - Inventory Sync Service: Cleanup()
    operation completed
    [11/11/05 09:39:29.016] Service Manager: Service InvDBSync Service stopped
    successfully
    [11/11/05 09:39:29.022] Service Manager: Stopping Service Dictionary
    Provider Service
    [11/11/05 09:39:29.050] Service Manager: Service Dictionary Provider
    Service stopped successfully

    > On Fri, 11 Nov 2005 13:23:13 GMT, [email protected] wrote:
    >
    > > Storer: Full scan being processed for
    CN=WKSTA1.OU=Workstations.OU=BO1.O=CORP
    > > Error in parsing:Could not process 000BCDC3AF5E_1070456883000_81.STR due to
    > > DB Operation failure..will retry
    > > TCP Receiver Service Started
    >
    > it COULD be
    > http://support.novell.com/cgi-bin/se...i?10099394.htm
    > --
    >
    >
    > Marcus Breiden
    >
    > Please change -- to - to mail me.
    > The content of this mail is my private and personal opinion.
    > http://www.edu-magic.net
    Marcus, thanx for the quick answer (the weekend is over, back to work...),
    but the facts in the TID do not match. Anyway, I think I should still try
    that fix (have to try something...). Why are those programs not public? Is
    there a way to make Storer skip bad .STR files? That would be handy in
    the future if any new problems occur. What do ya think, should I try to
    remove those problem workstations from the root database? Would that
    correct (I mean delete and then recreate) the faulty tables in the database
    if the TID is right?
    P.S. How do ya check if the Sybase DB is healthy, I mean the way you
    check DS with DSRepair...?
    -tommi-

  • Upgraded from ZFD3.2 SP3 - ZFD 6.5 SP1, can't store inventory

    I've recently upgraded my test server from ZFD 3.2 SP3 to ZFD 6.5 SP1,
    running on NetWare OES (NW 6.5.3 w/eDir 8.7.3.5). Everything appears to be
    working except inventory. When it goes to store the .STR file, it gives the
    following error - Storer: started storing ....
    Error in parsing xxxx.str due to other exception.
    Storer: Full scan will be initiated.
    We have tried it from an XP SP1 workstation running Novell Client 4.90 SP2
    and from a workstation with both Novell Client 4.90 SP2 and the ZFD Mgmt
    Agent. Both give the same results.
    Have recreated the workstation inventory policy and deleted all workstation
    objects. Newly imported workstations still give the error.
    Inventory is not getting stored in the database. Any ideas?
    Kathy

    dmvkrn,
    > error - Storer: started storing ....
    > Error in parsing xxxx.str due to other exception.
    > Storer: Full scan will be initiated.
    can you post a large portion of the log file?
    Jared
    Novell Support Forums SysOp
    Systems Analyst with Data Technique, INC.
    Using XanaNews 1.16.5.2 as a News Reader
    Novell eDirectory Services 1.6 Billion users worldwide
    81% of Fortune 500 use Novell's eDirectory
    Novell Desktop Management 40 Million users worldwide

  • Why Do Verizon Store Personnel ask for phone no., last 4 of SS and password?

    On Friday, Dec. 20, 2013, I went into a Verizon Premium Retailer in Glendale, CA to explore a smart phone that would do well for an active person.  A sturdy phone.  I was asked for my phone no., social security last 4 digits and the password for my account.  All this in a store full of people close by to hear.   I think the sales routine for store personnel should be first to explore what the customer wants and collect info privately.  She refused to help me unless I called Verizon for my password.  The password could be something a customer enters on a keypad so it's kept confidential.   My visit to this store resulted in my walking out because the other person working there told me I could not speak with them as I was speaking.  I just wanted to explore possibilities of a phone and service.  I was told I wasn't behaving right for her.  I was shocked at this rude comeback when the other girl was entering my driver's license info from my ID.  I asked for my ID back upon being spoken to this way.  The girl delayed in giving it to me. I then left, saying nothing except have a nice day.  She then yelled at me Merry Christmas and I said the same (not yelling) back to her as I exited.  It was a very strange customer service experience at this store.  This staff is not trained to be customer friendly and to find solutions for customers who come in to spend money in their store.  I'm sure I'm not the only person who would question giving up all this personal data.  Why so much?
    I can only think that she didn't like my frustration with the several steps to get into my account and didn't want to move on to show me a brochure or a sheet of paper with the basics.  She was not going to help me with any information on Verizon Wireless services unless I gave up all this personal information so everyone could hear it.  Wrong.  She had no business telling me about my behavior.  I was just asking for information on available phones and plans.  It was an awful experience to be spoken to that way by this person.  I'm further concerned by the fact that my driver's license info has now likely been compromised. 
    Is it my appearance she didn't like?  Her customer service to me was going to be her way or no way. 
    I want the manager to know about this and I want an apology.  I still do not have my questions answered about a new phone and service.

    cindyholly,
    We appreciate your feedback. I understand you want to keep your account information secure and we definitely want that as well; we make every effort to do this. You are asked to verify this information to ensure you are the account holder or an authorized member on the account. This is done to prevent unauthorized changes to your account and to ensure you are eligible for the new smartphone you have been thinking about. I can assure you none of the information provided has been compromised.
    However, this does not mean you should have had a bad experience, we truly apologize for that. We strive to provide you only with the best customer service and I regret to see this was not the case. We can definitely submit this feedback to ensure it’s addressed. What store was this at?
    Lastly, I want to make sure all your questions are answered. Here are our new plans http://vz.to/1hX6IBf and our top of the line new phones http://vz.to/19dnbMP . Please let us know what questions you have.  
    AdaS_VZW
    Follow us on Twitter at @VZWSupport 

  • Can I store and retrieve .html documents to iPad memory card?

    I have not purchased an iPad yet. I am trying to see if it will fit my business needs.
    I want to use the iPad for business, so that I can demo web sites that I build to potential customers. I know I won't always be in range of a wireless signal. Can I store the .HTML files on the memory card, so that I can call them up and browse them using Safari, where they would be accessed and look the same as if I was accessing the web in real time?
    I also own an iMac.

    The simple answer is maybe. The iPad natively reads HTML files that are email attachments. Through Good Reader you can read HTML and Safari web archive files.
    The problem is going to be in displaying images that are online if you're not connected. There are some site-builder apps for the iPad. You can search "HTML" in the App Store. Look carefully. Most offer a Safari preview but might not store images locally.
    A possible option is an app like Offline Pages (Freeware) that can store full pages and images for offline browsing.

  • How to get All Users from OID LDAP

    Hi all,
    I have Oracle Internet Directory(OID) and have created the users in it manually.
    Now I want to extract all the users from OID. How can I get the users from OID?
    Any response will be appreciated. If someone could show me demo code for that I would be grateful.
    Thanks and regards
    Pravy
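    For the demo-code part of the question: below is a minimal JNDI search sketch for listing users. The host, port, bind DN/password, and users container (cn=users,dc=example,dc=com) are all assumptions - substitute the values from your own OID:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class ListOidUsers {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://oidhost:3060");  // assumption: host/port
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");   // assumption: admin DN
            env.put(Context.SECURITY_CREDENTIALS, "welcome1");     // assumption: password

            InitialDirContext ctx = new InitialDirContext(env);
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            sc.setReturningAttributes(new String[] { "uid", "cn", "mail" });

            // Assumption: the users live under the default container.
            NamingEnumeration<SearchResult> results = ctx.search(
                    "cn=users,dc=example,dc=com", "(objectclass=inetorgperson)", sc);
            while (results.hasMore()) {
                SearchResult r = results.next();
                System.out.println(r.getNameInNamespace());
            }
            ctx.close();
        }
    }

    The ldapsearch commands in the MetaLink note below do the same thing from the command line.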

    hi,
    the notes from metalink:
    regards
    elvis
    Doc ID: Note:276688.1
    Subject: How to copy (export/import) the Portal database schemas of IAS 9.0.4 to another database
    Type: BULLETIN
    Status: PUBLISHED
    Content Type: TEXT/X-HTML
    Creation Date: 18-JUN-2004
    Last Revision Date: 05-AUG-2005
    How to copy (export/import) Portal database schemas of IAS 9.0.4 to another database
    Note 276688.1
    Download scripts Unix: Attachment 276688.1:1
    Download Perl scripts (Unix/NT) :Attachment 276688.1:2
    This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article.
    HISTORY
    Version 1.0 : 24-JUN-2004: creation
    Version 1.1 : 25-JUN-2004: added a link to download the scripts from Metalink
    Version 1.2 : 29-JUN-2004: Import script: Intermedia indexes are recreated. Imported jobs are reassigned to Portal. ptlconfig replaces ptlasst.
    Version 1.3 : 09-JUL-2004: Additional updates. Usage of iasconfig.xml. Need only 3 environment variables to import.
    Version 1.4 : 18-AUG-2004: Remark about 9.2.0.5 and 10.1.0.2 database
    Version 1.5 : 26-AUG-2004: Duplicate job id
    Version 1.6 : 29-NOV-2004: Remark about WWC-44131 and WWSBR_DOC_CTX_54
    Version 1.7 : 07-JAN-2005: Attached perl scripts (for NT/Unix) at the end of the note
    Version 1.8 : 12-MAY-2005: added a work-around for the WWSTO_SESS_FK1 issue
    Version 1.9 : 07-JUL-2005: logoff trigger and 9.0.1 database export, import in 10g database
    Version 1.10: 05-AUG-2005: reference to the 10.1.2 note
    PURPOSE
    This document explains how to copy a Portal database schema from a database to another database.
    It allows restoring the Portal repository and the OID security associated with Portal.
    It can be used to go into production by physically copying a database from a development portal to a production environment, avoiding the use of the export/import utilities of Portal.
    This note:
    uses the export/import on the database level
    allows the export/import to be done between different platforms
    The scripts are Unix-based, written for the bash shell. They can be adapted for other platforms.
    For those familiar with this technique in Portal 9.0.2, there is a list of the main differences from Portal 9.0.2 at the end of the note.
    These scripts are based on the experience of a lot of people with Portal 9.0.2.
    The scripts are attached to the note. Download them here: Attachment 276688.1:1 : exp_schema_904.zip
    A new version of the scripts was written in Perl. You can also download them here: Attachment 276688.1:2 : exp_schema_904_v2.zip. They do exactly the same as the bash ones, but they have the advantage of working on all platforms.
    SCOPE & APPLICATION
    This document is intended for Portal administrators. For using this note, you need basic DBA skills.
    This note is for Portal 9.0.4.x only. The notes for Portal 9.0.2 are:
    Note 228516.1 : How to copy (export/import) Portal database schemas of IAS 9.0.2 to another database
    Note 217187.1 : How to restore a cold backup of a Portal IAS 9.0.2 on another machine
    The note for Portal 10.1.2 is:
    Note 330391.1 : How to copy (export/import) Portal database schemas of IAS 10.1.2 to another database
    Method
    The method that we will follow in the document is the following one:
    Export:
    - export of the 4 portal schemas of a database (DEV / development)
    - export the LDAP OID users and groups (optional)
    Install a new machine with fresh IAS installation (PROD / production)
    Import:
    - delete the new and empty portal schema on PROD
    - import the schemas in the production database in place of the deleted schemas
    - import the LDAP OID users and groups (optional)
    - modify the configuration such that the infrastructure uses the portal repository of the backup
    - modify the configuration such that the portal repository uses the OID, webcache and SSO of the new infrastructure
    The export and the import are divided in several steps. All of these steps are included in 2 sample scripts:
    export : exp_portal_schema.sh
    import : imp_portal_schema.sh
    In the 2 scripts, all the steps are run in one shot. It is just an example. Depending on the configuration and circumstances, the steps can also be run independently.
    Convention
    Development (DEV) is the name of the machine where the source database resides
    Production (PROD) is the name of the machine the database is copied to
    Prerequisite
    Some prerequisites first.
    A. Environment variables
    To run the import/export, you will need 3 environment variables. In the given scripts, they are defined in 'portal_env.sh'
    SYS_PASSWORD - the password of user sys in the Portal database
    IAS_PASSWORD - the password of IAS
    ORACLE_HOME - the ORACLE_HOME of the midtier
    The rest of the settings are found automatically by reading the iasconfig.xml file and querying the OID. This is done in 'portal_automatic_env.sh'. I wish to write a note on iasconfig.xml and the way to transform it into useful environment variables, but it is not done yet. In the meantime, you can read the old 9.0.2 doc, which explains the meaning of most variables:
    < Note 223438.1 : Shell script to find your portal passwords, settings and place them in environment variables on Unix >
    B. Definition: Cutter database
    A 'Cutter Database' is the term used to designate a database created by RepCA or OUI that contains all the schemas used by an IAS 9.0.4 infrastructure, even if in most cases several schemas are not used.
    In Portal 9.0.4, the option to install only the portal repository in an empty database has been removed. It has been replaced by RepCA, a tool that creates an infrastructure database. Among all the infrastructure database schemas are the portal schemas.
    This does not stop people from using 2 databases to run portal, one for OID and one for Portal. But in comparison with Portal 9.0.2, all schemas exist in both databases even if some are not used.
    The main idea of the Cutter database is to have only 1 database type and, in the future, to simplify the upgrades of customer installations.
    For an installation where Portal and OID/SSO are in 2 separate databases, it looks like this:
    Infrastructure database (INFRA_SID)
    - Portal 9.0.2: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    - Portal 9.0.4: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    Portal database (PORTAL_SID)
    - Portal 9.0.2: the custom Portal database contains Portal (used)
    - Portal 9.0.4: the custom Portal database (also an infrastructure) contains OID (not used), OEM (not used), Single Sign-on / orasso (not used), Portal (used)
    In any case, the note will assume there is only one single database, but it also works for a 2-database installation like the one explained above.
    C. Directory structure.
    The sample scripts given in this note will be explained in the next paragraphs. But first, note that the scripts are designed around a directory structure that helps to classify the files.
    Here is a list of important files used during the process of export/import:
    File Name
    Description
    exp_portal_schema.sh
    Sample script that exports all the data needed from a development machine
    imp_portal_schema.sh
    Sample script that import all the data into a production machine
    portal_env.sh
    Script that defines the env variable specific to your system (to configure)
    portal_automatic_env.sh
    Helper script to get all the rest of the Portal settings automatically
    xsl
    Directory containing all the XSL files (helper scripts)
    del_authpassword.xsl
    Helper script to remove the authpassword tags in the DSML files
    portal_env_unix.sql
    Helper script to get Portal settings from the iasconfig.xml file
    exp_data
    Directory containing all the exported data
    portal_exp.dmp
    export on the database level of the portal, portal_app, ... database schemas
    iasconfig.xml
    a copy of the iasconfig.xml of the DEV midtier; used to get the hostname and port of WebCache
    portal_users.xml
    export from LDAP of the OID users used by Portal (optional)
    portal_groups.xml
    export from LDAP of the OID groups used by Portal (optional)
    imp_log
    Directory containing several spool and log files generated during the import
    import.log
    Log file generated when running the imp command
    ptlconfig.log
    Log generated by ptlconfig when rewiring portal to the infrastructure.
    Some other spool files.
    D. Known limitations
    The scripts given in this note have the following known limitations:
    It does not copy the data stored in the SSO schema: external application definitions and the passwords stored for them.
    See the post-steps section on SSO migration to know how to do this.
    The reason is that the ssomig command resides in the Infrastructure Oracle home, while all Portal commands are in the midtier home, and in practice these 2 Oracle homes are most of the time not on the same machine.
    The export of the users in OID exports from the default user location:
    ldapsearch .... -b "cn=users,dc=domain,dc=com"
    This is not 100% correct. The users are by default stored in something like "cn=users,dc=domain,dc=com". So, if the users are stored in the default location, it works. But if this location (the user install base) is customized, it does not work.
    The reason is that such a setting usually means the LDAP is highly customized, and I prefer that the administrator copy the real LDAP himself. The right command will probably depend on the customer's case, so I preferred not to take the risk.
    orclCommonNicknameAttribute must match in the target and source OID.
    The orclCommonNicknameAttribute must match on both the source and target OID. By default this attribute is set to "uid"; if this has been changed, it must be changed in both systems.
    Reference Note 282698.1
    Migration of custom Java portlets.
    The script migrates all the data of Portal stored in the database. If you have custom Java portlets deployed on your development machine, you will need to copy them to the production system.
    Step 1 - Export in Development (DEV)
    To export a full Portal installation to another machine, you need to follow 3 steps:
    Export at the database level the portal schemas + related schemas
    Get the midtier hostname and port of DEV
    Export of the users and groups with LDAPSEARCH in 2 XML files
    A script combining all the steps is available here.
    A. Export the 4 portal schemas (DEV)
    You need to export 3 types of database schemas:
    The 4 portal schemas created by default by the portal installation :
    portal,
    portal_app,
    portal_demo,
    portal_public
    The schemas where your custom database portlets / providers reside (if any)
    - The custom schemas you have created for storing your portlet / provider code
    The schemas where your custom tables reside (if any)
    - Your custom schemas accessed by portal and containing only data (tables, views ...)
    You can get an approximate list of the schemas - default portal schemas (1) and database portlet schemas (2) - with this query.
    SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
    FROM DBA_USERS
    WHERE USERNAME IN (user, user||'_PUBLIC', user||'_DEMO', user||'_APP')
    OR USERNAME IN (SELECT DISTINCT OWNER FROM WWAPP_APPLICATION$ WHERE NAME != 'WWV_SYSTEM');
    It still misses your custom schemas containing data only (3).
    We will export the 4 schemas and your custom ones in an export file with the user sys.
    Please use a command like this one:
    exp userid="'sys/change_on_install@dev as sysdba'" file=portal_exp.dmp grants=y log=portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
    The result is a dump file: 'portal_exp.dmp'. If you are using a 9.2.0.5 or 10.1.0.2 database, the format of the exp/imp dump file has changed. Please read this.
    B. Hostname and port
    For the URL used to access the portal, you need the following 2 pieces of information to run the script 'imp_portal_schema.sh' below:
    Webcache hostname
    Webcache listen port
    These values are contained in the iasconfig.xml file of the midtier.
    iasconfig.xml
    <IASConfig XSDVersion="1.0">
    <IASInstance Name="ias904.dev.dev_domain.com" Host="dev.dev_domain.com" Version="9.0.4">
    <OIDComponent AdminPassword="@BfgIaXrX1jYsifcgEhwxciglM+pXod0dNw==" AdminDN="cn=orcladmin" SSLEnabled="false" LDAPPort="3060"/>
    <WebCacheComponent AdminPort="4037" ListenPort="7782" InvalidationPort="4038" InvalidationUsername="invalidator" InvalidationPassword="@BR9LXXoXbvW1iH/IEFb2rqBrxSu11LuSdg==" SSLEnabled="false"/>
    <EMComponent ConsoleHTTPPort="1813" SSLEnabled="false"/>
    </IASInstance>
    <PortalInstance DADLocation="/pls/portal" SchemaUsername="portal" SchemaPassword="@BR9LXXoXbvW1c5ZkK8t3KJJivRb0Uus9og==" ConnectString="cn=asdb,cn=oraclecontext">
    <WebCacheDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <OIDDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <EMDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    </PortalInstance>
    </IASConfig>
    It corresponds to a portal URL like this:
    http://dev.dev_domain.com:7782/pls/portal
    The script exp_portal_schema.sh copies the iasconfig.xml file into the exp_data directory.
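    If you want to pull those two values out of iasconfig.xml programmatically instead of by eye, the JDK's built-in XPath support is enough. A minimal sketch; the file location (exp_data/iasconfig.xml) is an assumption following the directory layout above, and the element/attribute names match the sample file:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    public class ReadWebCacheInfo {
        public static void main(String[] args) throws Exception {
            // Parse the copied iasconfig.xml (the path is an assumption).
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("exp_data/iasconfig.xml"));
            XPath xp = XPathFactory.newInstance().newXPath();
            // Host attribute of the IASInstance, and ListenPort of its
            // WebCacheComponent, as in the sample file shown above.
            String host = xp.evaluate("/IASConfig/IASInstance/@Host", doc);
            String port = xp.evaluate("/IASConfig/IASInstance/WebCacheComponent/@ListenPort", doc);
            System.out.println("Portal URL: http://" + host + ":" + port + "/pls/portal");
        }
    }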
    C. Export the security: users and groups (optional)
    If you use Single Sign-On users other than the portal user, you probably need to restore the full security setup, i.e. the users and groups stored in OID, on the production machine. 5 steps need to be executed for this operation:
    Export the OID entries with LDAPSEARCH
    Before importing, change the domain in the generated files (optional)
    Before importing, remove the 'authpassword' attributes from the generated files
    Import them with LDAPADD
    Update the GUID/DN of the groups in portal tables
    Part 1 - LDAPSEARCH
    The typical commands to do this operation look like this:
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -b "cn=portal.040127.1384,cn=groups,dc=dev_domain,dc=com" -s sub "objectclass=*" > portal_group.xml
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -D "cn=orcladmin" -w $IAS_PASSWORD -b "cn=users,dc=dev_domain,dc=com" -s sub "objectclass=inetorgperson" > portal_users.xml
    Take care about the following points:
    The groups are stored in an LDAP container whose name contains the date of installation
    ( in this example: portal.040127.1384,cn=groups,dc=dev_domain,dc=com )
    If the domain of dev and prod is different, the exported files contain the name of the development domain in the form 'dc=dev_domain,dc=com' in a lot of places. The domain name needs to be replaced by the production domain name everywhere in the files.
    Ldapsearch uses the option '-X'. It is there to export to DSML files (XML). It avoids a problem related to common LDAP files, LDIF files: LDIF files are wrapped at 78 characters, and that wrapping makes it difficult to change the domain name contained in the LDIF files. XML files are not wrapped and do not have this problem.
A sample script to export the 2 XML files is given in step 3 - export the users and groups (optional) of the export script below.
    Part 2 : change the domain in the DSML files
If the domains of dev and prod are different, the exported files contain the development domain name in the form 'dc=dev_domain,dc=com' in many places. It needs to be replaced by the production domain name everywhere in the files.
    To do this, we can use these commands:
cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/g" > imp_log/portal_groups.xml
cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/g" > imp_log/temp_users.xml
    Part 3 : Remove the authpassword attribute
Exporting all attributes of all users has also exported an automatically generated OID attribute called 'authpassword'.
'authpassword' holds a list of automatically generated passwords for several types of applications, and mostly it cannot be imported. Also, there is no option in ldapsearch (that I know of) to exclude an attribute, and passing ldapsearch the very long list of all attributes except 'authpassword' is impractical, so we remove the attribute after the export instead.
For that we use the fact that the DSML files are XML files. Oracle IAS ships an XSLT processor, the executable '$ORACLE_HOME/bin/xml'. XSLT is a W3C standard for transforming an XML file with the help of an XSL stylesheet.
    Here is the XSL file to remove the authpassword tag.
del_authpassword.xsl
    <!--
    File : del_authpassword.xsl
    Version : 1.0
    Author : mgueury
    Description:
    Remove the authpassword from the DSML files
    -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml"/>
    <xsl:template match="*|@*|node()">
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:template>
    <xsl:template match="attr">
    <xsl:choose>
    <xsl:when test="@name='authpassword;oid'">
    </xsl:when>
    <xsl:when test="@name='authpassword;orclcommonpwd'">
    </xsl:when>
    <xsl:otherwise>
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:otherwise>
    </xsl:choose>
    </xsl:template>
    </xsl:stylesheet>
And the command to make the transformation:
xml -f -s del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
Where:
    imp_log/portal_users.xml is the final file without authpassword tags
imp_log/temp_users.xml is the input file with the authpassword tags that cannot be imported.
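A quick sanity check after the transformation (my own suggestion, not part of the note): the output file should contain no authpassword attributes at all.
# count leftover authpassword attributes; this should print 0
grep -c 'authpassword' imp_log/portal_users.xml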
    Part 4 : LDAPADD
    The typical commands to do this operation look like this:
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_group.xml
ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_users.xml
Take care about the following points:
Ldapadd uses the option '-c': existing users/groups generate an error, and -c lets the command continue, ignoring these errors. Nevertheless, the errors should be checked to confirm that they only concern already-existing entries.
A sample script to import the 2 XML files is given in step 5 - import the users and groups (optional) of the import script.
    Part 5 : Update the GUID/DN
In Portal 9.0.4, the update of the GUIDs is taken care of by PTLCONFIG during the import (import step 7).
    D. Example script for export
Here is an example script that combines the 3 steps.
Depending on your needs, you will:
either execute all the steps
or just execute the first one (export of the database users). That is enough if you only want to log in with the portal user on the production instance.
If your portal repository resides in a 9.2.0.5 or 10.1.0.2 database, please read problem 6 (EXP-00003) in the PROBLEMS section below.
You can download all the scripts here: Attachment 276688.1:1
Do not forget to adapt the script to your needs, and above all to add the list of your own users as explained in point A above.
    exp_portal_schema.sh
    # BASH Script : exp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
# This script exports a portal dump file from a dev instance
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # In case you do not use portal_env.sh you have to define all the variables
    # For exporting the dump file only.
    # export SYS_PASSWORD=change_on_install
    # export PORTAL_TNS=asdb
    # For the security (optional)
    # export IAS_PASSWORD=welcome1
    # export PORTAL_USER=portal
    # export PORTAL_PASSWORD=A1b2c3de
    # export OID_HOSTNAME=development.domain.com
    # export OID_PORT=3060
    # export OID_DOMAIN_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
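# For example (an illustrative value, not taken from this note):
#   OID_HOSTNAME=development.domain.com would give OID_DOMAIN_DN=dc=domain,dc=com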
    # ------------------------------ Help function -----------------------------------
function press_any_key() {
if [ "$PRESS_ANY_KEY_AFTER_EACH_STEP" = "Y" ]; then
echo
echo Press enter to continue
read ANY_KEY
else
echo
fi
}
    echo "------------------------------- Export ------------------------------------"
    # create a directory for the export
    mkdir exp_data
    # copy the env variables in the log just in case
    export > exp_data/exp_env_variable.txt
    echo "--------------------- step 1 - export"
    # export the portal users, but take care to add:
    # - your users containing DB providers
    # - your users containing data (tables)
exp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=exp_data/portal_exp.log owner="(portal,portal_app,portal_demo,portal_public)"
    press_any_key
    echo "--------------------- step 2 - store iasconfig.xml file of the MIDTIER"
    cp $MIDTIER_ORACLE_HOME/portal/conf/iasconfig.xml exp_data
    press_any_key
    echo "--------------------- step 3 - export the users and groups (optional)"
    # Export the groups and users from OID in 2 XML files (not LDIF)
    # The OID groups of portal are stored in GROUP_INSTALL_BASE that depends
    # of the installation date.
    # For the user, I use the default place. If it does not work,
    # you can find the user place with:
    # > exec dbms_output.put_line(wwsec_oid.get_user_search_base);
    # Get the GROUP_INSTALL_BASE used in security export
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool exp_data/group_base.log
    begin
    dbms_output.put_line(wwsec_oid.get_group_install_base);
    end;
    IASDB
    export GROUP_INSTALL_BASE=`grep cn= exp_data/group_base.log`
    echo '--- Exporting Groups'
    echo 'creating portal_groups.xml'
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -b "$GROUP_INSTALL_BASE" -s sub "objectclass=*" > exp_data/portal_groups.xml
    echo '--- Exporting Users'
    echo 'creating portal_users.xml'
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -X -b "cn=users,$OID_DOMAIN_DN" -s sub "objectclass=inetorgperson" > exp_data/portal_users.xml
The script is meant to be run from the midtier.
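A typical invocation from the midtier looks like this (the tee to a transcript file is my own habit, not part of the note):
chmod +x exp_portal_schema.sh
./exp_portal_schema.sh 2>&1 | tee exp_portal_schema.out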
    Step 2 - Install IAS in a new machine (PROD)
    A. Installation
This note does not distinguish whether Portal shares the same database as Single Sign-On and OID. For simplicity, I will speak only about one database, but you could also create a second infrastructure database just for the portal repository. That is better for a production system, because the Portal repository is then the only product in the second database, which makes it easy to back up on its own.
On the production machine, you need a fresh install of IAS 9.0.4. Take care to use:
the same IAS patchset (9.0.4.1, 9.0.4.2, ...) on the middle-tier and infrastructure as in development
and the same characterset as in development (or UTF8)
    The result will be 2 ORACLE_HOMES and 1 infrastructure database:
    the ORACLE_HOME of the infrastructure (SID:infra904)
    the ORACLE_HOME of the midtier (SID:ias904)
    an infrastructure database (SID:asdb)
The new, empty Portal install should work fine before you go to the next step.
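A quick command-line check that the empty install answers, if curl is available (hostname and port are illustrative; any HTTP answer or redirect to the login page is fine):
curl -I http://prod.prod_domain.com:7782/pls/portal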
    B. About tablespaces (optional)
The sizes of the production tablespaces should match those of the development machine. If not, the tablespaces will autoextend; that is not really a concern, but it is slow. You should modify the tablespaces so that prod has as much space as dev.
Also, it is safer to check that there is enough free space on the hard disk before importing into the database.
To modify the tablespace sizes, you can use the Oracle Enterprise Manager console:
On Unix: . oraenv (answer infra904), then: oemapp dbastudio
On NT: Start / Programs / Oracle Application Server - infra904 / Enterprise Manager Console
    Launch standalone
    Choose the portal database (typically asdb.domain.com)
    Connect with a DBA user, sys or system
    Click Storage/Tablespaces
    Change the size of the PORTAL, PORTAL_DOC, PORTAL_LOGS, PORTAL_IDX tablespaces
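If you prefer the command line to the console, the datafiles can also be resized directly in SQL*Plus, following the same sqlplus-heredoc style as the scripts in this note. The datafile path and target size below are illustrative; take the real ones from dba_data_files:
sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
select file_name, bytes from dba_data_files where tablespace_name like 'PORTAL%';
alter database datafile '/u01/oradata/asdb/portal01.dbf' resize 500M;
IASDB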
    C. Backup
It is a good idea to take a backup of the MIDTIER and INFRASTRUCTURE Oracle Homes at this point, so that if the import fails for any reason you can retest it as often as you want without reinstalling everything.
    Step 3 - Import in production (on PROD)
The following is a sample Unix script that combines all the steps to import a portal repository on the production machine.
To import a portal repository and its users and groups in OID, you need to do 8 things:
    Stop the midtier to avoid errors while dropping the portal schema
SQL*Plus with SYS:
Drop the 4 default portal schemas
Create the portal users with the same passwords as the users just dropped and give them grants (you need to create your own custom schemas too if you have some).
    Import the dump file
    Import the users and groups into OID (optional)
    SQL*Plus with SYS : Post import changes
    Recompile everything in the database
    Reassign the imported jobs to portal
    SQL*Plus with Portal : Post import changes
    Recreate the Portal intermedia indexes
Correct an import error on wwsrc_preference$
    Make additional post import changes, by updating some portal tables, and replacing the development hostname, port or domain by the production ones.
    Rewire the portal repository with ptlconfig -dad portal
    Restart the midtier
    Here is a sample script to do this on Unix. You will need to adapt the script to your needs.
    imp_portal_schema.sh
    # BASH Script : imp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
# This script imports a portal dump file and relinks it with an
    # infrastructure.
    # Script to be started from the MIDTIER
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # Development and Production machine hostname and port
    # Example
    # .._HOSTNAME machine.domain.com (name of the MIDTIER)
    # .._PORT 7782 (http port of the MIDTIER)
    # .._DN dc=domain,dc=com (domain name in a LDAP way)
    # These values can be determined automatically with the iasconfig.xml file of dev
    # and prod. But if you do not know or remember the dev hostname and port, this
    # query should find it.
    # > select name, http_url from wwpro_providers$ where http_url like 'http%'
    # These variables are used in the
    # > step 4 - security / import OID users and groups
    # > step 6 - post import changes (PORTAL)
    # Set the env variables of the DEV instance
rm -f /tmp/iasconfig_env.sh
    xml -f -s xsl/portal_env_unix.xsl -o /tmp/iasconfig_env.sh exp_data/iasconfig.xml
    . /tmp/iasconfig_env.sh
    export DEV_HOSTNAME=$WEBCACHE_HOSTNAME
    export DEV_PORT=$WEBCACHE_LISTEN_PORT
    export DEV_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # Set the env variables of the PROD instance
    . portal_env.sh
    export PROD_HOSTNAME=$WEBCACHE_HOSTNAME
    export PROD_PORT=$WEBCACHE_LISTEN_PORT
    export PROD_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # ------------------------------ Help function -----------------------------------
function press_any_key() {
if [ "$PRESS_ANY_KEY_AFTER_EACH_STEP" = "Y" ]; then
echo
echo Press enter to continue
read ANY_KEY
else
echo
fi
}
    echo "------------------------------- Import ------------------------------------"
    # create a directory for the logs
    mkdir imp_log
    # copy the env variables in the log just in case
    export > imp_log/imp_env_variable.txt
    echo "--------------------- step 1 - stop the midtier"
# This step is needed to avoid most cases of ORA-01940: user connected
    # when dropping the portal user
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl stopall
    press_any_key
    echo "--------------------- step 2 - drop and create empty users"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/drop_create_user.log
    ---- Drop users
    -- Warning: You need to stop all SQL*Plus connection to the
    -- portal schema before that else the drop will give an
    -- ORA-01940: cannot drop a user that is currently connected
    drop user portal_public cascade;
    drop user portal_app cascade;
    drop user portal_demo cascade;
    drop user portal cascade;
    ---- Recreate the users and give them grants"
    -- The new users will have the same passwords as the users we just dropped
    -- above. Do not forget to add your exported custom users
    create user portal identified by $PORTAL_PASSWORD default tablespace portal;
    grant connect,resource,dba to portal;
    create user portal_app identified by $PORTAL_APP_PASSWORD default tablespace portal;
    grant connect,resource to portal_app;
    create user portal_demo identified by $PORTAL_DEMO_PASSWORD default tablespace portal;
    grant connect,resource to portal_demo;
    create user portal_public identified by $PORTAL_PUBLIC_PASSWORD default tablespace portal;
    grant connect,resource to portal_public;
    alter user portal_public grant connect through portal;
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wwv/wdbigra.sql portal
    exit
    IASDB
    press_any_key
    echo "--------------------- step 3 - import"
    imp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=imp_log/import.log full=y
    press_any_key
    echo "--------------------- step 4 - import the OID users and groups (optional)"
    # Some errors will be raised when running the ldapadd because at least the
    # default entries will not be able to be inserted. Remove them from the
    # ldif file if you want to avoid them. Due to the flag '-c', ldapadd ignores
# duplicate entries. Another, more radical, solution is to erase all the user
# and group entries in OID before running the import.
    # Replace the domain name in the XML files.
cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/g" > imp_log/portal_groups.xml
cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/g" > imp_log/temp_users.xml
    # Remove the authpassword attributes with a XSL stylesheet
    xml -f -s xsl/del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
    echo '--- Importing Groups'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_groups.xml -v
    echo '--- Importing Users'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_users.xml -v
    press_any_key
    echo "--------------------- step 5 - post import changes (SYS)"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/sys_post_changes.log
    ---- Recompile the invalid packages"
    -- On the midtier, the script utlrp is not present. This step
    -- uses a copy of it stored in patch/utlrp.sql
    select count(*) INVALID_OBJECT_BEFORE from all_objects where status='INVALID';
    start patch/utlrp.sql
    set lines 999
    select count(*) INVALID_OBJECT_AFTER from all_objects where status='INVALID';
    ---- Jobs
    -- Reassign the JOBS imported to PORTAL. After the import, they belong
    -- incorrectly to the user SYS.
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 6 - post import changes (PORTAL)"
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool imp_log/portal_post_changes.log
    ---- Intermedia
    -- Recreate the portal indexes.
    -- inctxgrn.sql is missing from the 9040 CD-ROMS. This is the bug 3536937.
    -- Fixed in 9041. The missing script is contained in the downloadable zip file.
    start patch/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    ---- Import error
    alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
    primary key (subscriber_id, id)
    using index wwsrc_preference_idx1
    begin
    DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
    '', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
    static_policy=>true);
    end ;
    ---- Modify tables with full URLs
-- If the domain names of prod and dev are different, this step is really important.
-- It modifies the portal tables that contain references to the hostname or port
-- of the development machine. (For more explanation: see Additional steps in the note)
-- groups (dn)
update wwsec_group$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
update wwsec_group$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
-- users (dn)
update wwsec_person$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
update wwsec_person$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
-- subscriber
update wwsub_model$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' ), GUID=':1'
where dn like '%$DEV_DN%';
-- preferences
update wwpre_value$
set varchar2_value=replace( varchar2_value, '$DEV_DN', '$PROD_DN' )
where varchar2_value like '%$DEV_DN%';
update wwpre_value$
set varchar2_value=replace( varchar2_value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where varchar2_value like '%$DEV_HOSTNAME:$DEV_PORT%';
-- page url items
update wwv_things
set title_link=replace( title_link, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where title_link like '%$DEV_HOSTNAME:$DEV_PORT%';
-- web providers
update wwpro_providers$
set http_url=replace( http_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where http_url like '%$DEV_HOSTNAME:$DEV_PORT%';
-- html links created by the RTF editor inside text items
update wwv_text
set text=replace( text, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where text like '%$DEV_HOSTNAME:$DEV_PORT%';
-- Portlet metadata nls: help URL
update wwpro_portlet_metadata_nls$
set help_url=replace( help_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where help_url like '%$DEV_HOSTNAME:$DEV_PORT%';
-- URL items (there is a trigger on this table building absolute_url automatically)
update wwsbr_url$
set absolute_url=replace( absolute_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where absolute_url like '%$DEV_HOSTNAME:$DEV_PORT%';
-- Things attributes
update wwv_thingattributes
set value=replace( value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where value like '%$DEV_HOSTNAME:$DEV_PORT%';
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 7 - ptlconfig"
    # Configure portal such that portal uses the infrastructure database
    cd $MIDTIER_ORACLE_HOME/portal/conf/
    ./ptlconfig -dad portal
    cd -
    mv $MIDTIER_ORACLE_HOME/portal/logs/ptlconfig.log imp_log
    press_any_key
    echo "--------------------- step 8 - restart the midtier"
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl startall
    date
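As with the export, a typical invocation keeps a transcript of the whole run (again my own habit, not part of the note):
chmod +x imp_portal_schema.sh
./imp_portal_schema.sh 2>&1 | tee imp_portal_schema.out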
Each step can generate its own errors due to many factors. It is better to run the import step by step the first time.
    Do not forget to check the output of log files created during the various steps of the import:
imp_log/drop_create_user.log - spool of dropping and recreating the portal users
imp_log/import.log - import log when importing the portal_exp.dmp file
imp_log/sys_post_changes.log - spool of the post-import changes made with SYS
imp_log/portal_post_changes.log - spool of the post-import changes made with PORTAL
imp_log/ptlconfig.log - log file of ptlconfig when rewiring the midtier
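One way to scan all of these logs at once for the usual error prefixes (a convenience of mine, not from the note):
grep -n -E 'ORA-|IMP-|EXP-|PLS-' imp_log/*.log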
    Step 4 - Test
    A. Check the log files
    B. Test the website and see if it works fine.
    Step 5 - take a backup
Take a backup of all ORACLE_HOMEs and databases to protect against hardware problems. You need to copy:
    All the files of the 2 ORACLE_HOME
    And all the database files.
    Step 6 - Additional steps
    Here are some additional steps.
SSO external applications (they are part of the orasso schema and are not imported yet)
Page URL items (they seem to store the full URL) - included in imp_portal_schema.sh
Web Providers (the URL needs to be changed) - included in imp_portal_schema.sh
Text items edited with the RTF editor in IE and containing links - included in imp_portal_schema.sh
Most of them are taken care of by step 6 - post import changes (PORTAL) of the import script; the first one is not.
    1. SSO import
This script imports only Portal and the users/groups of OID, not the list of external applications contained in the orasso schema.
In Portal 9.0.4, there is a script called SSOMIG that resides in $INFRA_ORACLE_HOME/sso/bin and allows you to move:
    Definitions and user data for external applications
    Registration URLs and tokens for partner applications
    Connection information used by OracleAS Discoverer to access various data sources
    See:
    Oracle® Application Server Single Sign-On Administrator's Guide 10g (9.0.4) Part Number B10851-01
    14. Exporting and Importing Data
    2. Page items: the page URL items store the full URL.
    This is Bug 2661805 fixed in Portal 9.0.2.6.
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- page url items
update wwv_things
set title_link=replace( title_link, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
where title_link like '%dev.dev_domain.com:7778%';
3. Web Providers
The URLs of the Web providers also need to change: like the Page items, they contain the full address of the webserver.
You can get the list of URLs to change with this query:
select name, http_url from PORTAL.WWPRO_PROVIDERS$ where http_url like 'http%';
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- web providers
update wwpro_providers$
set http_url=replace( http_url, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
where http_url like '%dev.dev_domain.com:7778%';
4. The production and development machines do not share the same domain
If the domains of production and development are not the same, the DNs (LDAP names) of all users need to change.
    Let's say from
    dc=dev_domain,dc=com -> dc=prod_domain,dc=com
1. Before uploading the exported files: all the strings in the 2 files that contain 'dc=dev_domain,dc=com' have to be replaced by 'dc=prod_domain,dc=com'.
2. In the wwsec_group$ and wwsec_person$ tables in portal, the DNs need to change too.
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- groups (dn)
update wwsec_group$
set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' );
update wwsec_group$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
-- users (dn)
update wwsec_person$
set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' );
update wwsec_person$
set dn_hash = wwsec_api_private.get_dn_hash( dn );
    5. Text items with HTML links
Sometimes people store full URLs inside their text items; this happens mostly when they insert links with the RichText Editor in IE.
The following work-around is implemented in the post-import step of imp_portal_schema.sh:
-- html links created by the RTF editor inside text items
update wwv_text
set text=replace( text, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
where text like '%dev.dev_domain.com:7778%';
    6. OID Custom password policy
People quite often change the password policy of the OID server, because with the default policy passwords expire after 60 days. If you have done so, do not forget to make the same changes in the new installation.
    PROBLEMS
    1. Import log has some errors
A. EXP-00091 - Exporting questionable statistics
    You can ignore this error.
    B. IMP-00017 - WWSRC_PREFERENCE$
    When importing, there is one import error:
    IMP-00017: following statement failed with ORACLE error 921:
    "ALTER TABLE "WWSRC_PREFERENCE$" ADD "
    IMP-00003: ORACLE error 921 encountered
ORA-00921: unexpected end of SQL command
The primary key is not created. You can create it with this command in SQL*Plus as the user portal, then re-add the missing VPD policy:
    alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
    primary key (subscriber_id, id)
    using index wwsrc_preference_idx1
    begin
    DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
    '', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
    static_policy=>true);
    end ;
Step 6 of the script imp_portal_schema.sh takes care of this.
    C. IMP-00017 - WWDAV$ASL
. importing table "WWDAV$ASL"
Note: table contains ROWID column, values may be obsolete
113 rows imported
This warning is normal: the table really contains a ROWID column.
    D. IMP-00041 - Warning: object created with compilation warnings
This error is normal too: the packages giving this error have dependencies on packages not yet imported. A recompilation is done after the import.
E. ldapadd error 'cannot add entries containing authpasswords'
    # ldap_add: DSA is unwilling to perform
    # ldap_add: additional info: You cannot add entries containing authpasswords.
    "authpasswords" are automatically generated values from the real password of the user stored in userpassword. These values do not have to be exported from ldap.
    In the import script, I remove the additional tag with a XSL stylesheet 'del_authpassword.xsl'. See above.
    F. IMP-00017: WWSTO_SESSION$
    IMP-00017: following statement failed with ORACLE error 2298:
    "ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1""
    IMP-00003: ORACLE error 2298 encountered
    ORA-02298: cannot validate (PORTAL.WWSTO_SESS_FK1) - parent keys not found
Here is a work-around for the problem; I will integrate it in a future version of the scripts.
    SQL> delete from WWSTO_SESSION_DATA$;
    7690 rows deleted.
    SQL> delete from WWSTO_SESSION$;
    1073 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1";
    Table altered.
    G. IMP-00017 - ORACLE error 1 - DBMS_JOB.ISUBMIT
This error can appear during the import when the target database is not empty and has already been customized for some reason: for example, you export from an infrastructure and import into a database where a lot of other programs use jobs, and unluckily one has the same job id.
Due to the way the export/import of jobs is done, the jobs keep their ids after the import, so they may conflict.
    IMP-00017: following statement failed with ORACLE error 1: "BEGIN DBMS_JOB.ISUBMIT(JOB=>42,WHAT=>'begin execute immediate " "''begin wwutl_cache_sys.process_background_inval; end;'' ; exc" "eption when others then wwlog_api.log(p_domain=> ''utl'', " " p_subdomain=>''cache'', p_name=>''background'', " " p_action=>''process_background_inval'', p_information => ''E" "rror in process_background_inval ''|| sqlerrm);end;', NEXT_DATE=" ">TO_DATE('2004-08-19:17:32:16','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'SYSDATE " "+ 60/(24*60)',NO_PARSE=>TRUE); END;"
    IMP-00003: ORACLE error 1 encountered ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
    ORA-06512: at "SYS.DBMS_JOB", line 97 ORA-06512: at line 1
Solutions:
1. Use a freshly installed database, or
2. Since the conflicting jobs only appear in customized installations, there is no general rule. But you can recreate the jobs lost during the import with other ids, and/or change the job ids of the other programs before importing. This type of command can help you (you need to do it as SYS):
select * from dba_jobs;
update dba_jobs set job=99 where job=52;
commit;
    2. Import in a RAC environment
    Be aware of the Bug 2479882 when the portal database is in a RAC database.
Bug 2479882: NEEDED TO BOUNCE DB NODES AFTER INSTALLING PORTAL 9.0.2 IN RAC NODE
3. Intermedia
After importing an environment, the intermedia indexes are invalid. To correct the error, you need to run the following in SQL*Plus as Portal:
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
But $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql is missing in IAS 9.0.4.0. This is Bug 3536937, fixed in 9.0.4.1. The missing scripts are contained in the downloadable zip file (exp_schema904.zip: Attachment 276688.1:1), directory sql. In practice, this means that on 9.0.4.0 you have to run:
    start sql/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    In the import script, it is done in the step 6 - recreate Portal Intermedia indexes.
You cannot work around the problem without the scripts: running ctxcrind.sql alone does not work. You will get this error:
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 1035
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 476
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-20000: Oracle Text error:
    DRG-12603: CTXSYS does not own user datastore procedure: WWSBR_THING_CTX_69
    ORA-06512: at line 13
    4. ptlconfig
If you simply run ptlconfig after an import, you will get an error:
    Problem processing Portal instance: Configuring HTTP server settings : Installing cache data : SQL exception: ERROR: ORA-23421: job number 32 is not a job in the job queue
This is because the import, done by user SYS, has assigned the PORTAL jobs to the SYS schema in place of portal. The solution is to run:
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
In the import script, this is done in step 5 - post import changes (SYS).
    5. WWC-41417 - invalid credentials.
    When you try to login you get:
    Unexpected error encountered in wwsec_app_priv.process_signon (User-Defined Exception) (WWC-41417)
    An exception was raised when accessing the Oracle Internet Directory: 49: Invalid credentials
    Details
    Error:Operation: dbms_ldap.simple_bind_s
    OID host: machine.domain.com
    OID port number: 4032
Entry DN: orclApplicationCommonName=PORTAL,cn=Portal,cn=Products,cn=OracleContext. (WWC-41743)
Solution:
    - run secupoid.sql
    - rerun ptlconfig
    This problem has been seen after using ptlasst in place of ptlconfig.
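A minimal sketch of that solution, assuming the standard 9.0.4 layout where secupoid.sql lives under portal/admin/plsql/wwc of the midtier home:
cd $MIDTIER_ORACLE_HOME/portal/admin/plsql/wwc
sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS @secupoid.sql
cd $MIDTIER_ORACLE_HOME/portal/conf
./ptlconfig -dad portal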
6. EXP-00003 with a database 9.2.0.5 or 10.1.0.2
In fact, the DB format used by imp/exp has changed in 9.2.0.5 and 10.1.0.2. The EXP-00003 error only occurs when the export from the 9.2.0.5.0 or 10.1.0.2.0 database is done with a lower-release export utility, e.g. 9.2.0.4.0.
Due to the way this note is written, the imp/exp utility used is the one of the midtier (9014), so if your portal resides in a 9.2.0.5 database, it will not work. To work around the problem, there are 2 solutions:
Change the script so that it uses the exp and imp commands of the database home.
Make a change to the 9.2.0.5 or 10.1.0.2 database to make it compatible with previous versions. The change is to modify a database internal view before exporting/importing the data.
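For the first solution, a sketch of what the change looks like (the database ORACLE_HOME path is illustrative): call the exp shipped with the database home instead of the midtier's.
DB_ORACLE_HOME=/u01/app/oracle/product/9.2.0.5
$DB_ORACLE_HOME/bin/exp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" \
file=exp_data/portal_exp.dmp grants=y log=exp_data/portal_exp.log \
owner="(portal,portal_app,portal_demo,portal_public)"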
For the second solution, a work-around is given in Bug 3784697:
    1. Make a note of the export definition of exu9tne from
    $OH/rdbms/admin/catexp.sql
    2. Copy this to a new file and add "UNION ALL select * from sys.exu9tneb" to the end of the definition
    3. Run this as sys against the DB to be exported.
    4. Export as required
    5. Put back the original definition of exu9tne
e.g. for 9.2.0.4 the work-around view would be:
    CREATE OR REPLACE VIEW exu9tne (
    tsno, fileno, blockno, length) AS
    SELECT ts#, segfile#, segblock#, length
    FROM sys.uet$
    WHERE ext# = 1
    UNION ALL
select * from sys.exu9tneb;
    7. EXP-00006: INTERNAL INCONSISTENCY ERROR
    This is Bug 2906613.
    The work-around given in this bug is the following:
    - create the following view, connected as sys, before running export:
    CREATE OR REPLACE VIEW exu8con (
    objid, owner, ownerid, tname, type, cname,
    cno, condition, condlength, enabled, defer,
    sqlver, iname) AS
    SELECT o.obj#, u.name, c.owner#, o.name,
    decode(cd.type#, 11, 7, cd.type#),
    c.name, c.con#, cd.condition, cd.condlength,
    NVL(cd.enabled, 0), NVL(cd.defer, 0),
    sv.sql_version, NVL(oi.name, '')
    FROM sys.obj$ o, sys.user$ u, sys.con$ c,
    sys.cdef$ cd, sys.exu816sqv sv, sys.obj$ oi
    WHERE u.user# = c.owner# AND
    o.obj# = cd.obj# AND
    cd.con# = c.con# AND
    cd.spare1 = sv.version# (+) AND
    cd.enabled = oi.obj# (+) AND
    NOT EXISTS (
    SELECT owner, name
    FROM sys.noexp$ ne
    WHERE ne.owner = u.name AND
    ne.name = o.name AND
ne.obj_type = 2);
    The modification of exu8con simply adds support for a constraint type that had not previously been supported by this view. There is no negative impact.
    8. WWSBR_DOC_CTX_54 is invalid
After the recompilation of the packages, one object remains invalid (see sys_post_changes.log):
    INVALID_OBJECT_AFTER
    1
select owner, object_name from all_objects where status='INVALID';
    CTXSYS WWSBR_DOC_CTX_54
    CREATE OR REPLACE procedure WWSBR_DOC_CTX_54
    (rid in rowid, bilob in out NOCOPY blob)
    is begin PORTAL.WWSBR_CTX_PROCS.DOC_CTX(rid,bilob);end;
This object is not used by portal anymore; the error can be ignored, and the procedure can even be removed. This is Bug 3559731.
    9. You do not have permission to perform this operation. (WWC-44131)
It seems that there are problems if:
- the groups on the production machine do not reside in the default place in OID,
- and the group creation base and group search base were changed.
In that case the cloning of the repository works without problem, but it seems that the command 'ptlconfig -dad portal' does not reset the GUIDs and DNs of the groups correctly. I have not checked this yet.
The solution seems to be to use the script given in the 9.0.2 Note 228516.1 and to run group_sec.sql to reset all the DNs and GUIDs in the copied instance.
    10. Invalid Java objects when exporting from a 9.x database and importing in a 10g database
    If you export from a 9.x database and import in a 10g database, after running utlrp.sql, 18 Java objects will be invalid.
    select object_name, object_type from user_objects where status='INVALID'
    SQL> /
    OBJECT_NAME OBJECT_TYPE
    /556ab159_Handler JAVA CLASS
    /41bf3951_HttpsURLConnection JAVA CLASS
    /ce2fa28e_ProviderManagerClien JAVA CLASS
    /c5b98d35_ServiceManagerClient JAVA CLASS
    /d77cf2ab_SOAPServlet JAVA CLASS
    /649bf254_JavaProvider JAVA CLASS
    /a9164b8b_SpProvider JAVA CLASS
    /2ee43ac9_StatefulEJBProvider JAVA CLASS
    /ad45acec_StatelessEJBProvider JAVA CLASS
    /da1c4a59_EntityEJBProvider JAVA CLASS
    /66fdac3e_OracleSOAPHTTPConnec JAVA CLASS
    /939c36f5_OracleSOAPHTTPConnec JAVA CLASS
    org/apache/soap/rpc/Call JAVA CLASS
    org/apache/soap/rpc/RPCMessage JAVA CLASS
    org/apache/soap/rpc/Response JAVA CLASS
    /198a7089_Message JAVA CLASS
    /2cffd799_ProviderGroupUtils JAVA CLASS
    /32ebb779_ProviderGroupMgrProx JAVA CLASS
    18 rows selected.
This is a known issue. It can be solved by applying one of the following patches, depending on your IAS version.
    Bug 3405173 - PORTAL 9.0.4.0.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100409 - PORTAL 9.0.4.1.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100417 - PORTAL 9.0.4.2.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    11. Import : IMP-00003: ORACLE error 30510 encountered
When importing Portal 9.0.4.x, the database-side import may produce an ORA-30510 error. The new perl scripts work around the issue in the portal_post_import.sql script, but the BASH scripts do not. If you use the BASH scripts, please run this command manually in SQL*Plus logged in as portal after the import:
    ---- Import error 2 - ORA-30510 when importing
    CREATE OR REPLACE TRIGGER logoff_trigger
    before logoff on schema
    begin
    -- Call wwsec_oid.unbind to close open OID connections if any.
    wwsec_oid.unbind;
    exception
    when others then
    -- Ignore all the errors encountered while unbinding.
    null;
end logoff_trigger;
/
This is logged as Bug 4458413.
12. Exporting from a 9.0.1 database and importing into a 9.2.0.5+ or 10g DB
It can happen, when exporting from a 9.0.1 database and importing into a 10g database, that the java classes do not get compiled correctly. The following errors are seen:
    ORA-29534: referenced object PORTAL.oracle/net/www/proto/https/HttpsURLConnection could not be resolved
    errors:: class oracle/net/www/proto/https/HttpsURLConnection
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactoryImpl could not be found
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactory could not be found
    In such a case, please apply the following patches after the import in the 10g database.
    Bug 3405173 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.0
    Bug 4100409 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.1
    Main Differences with Portal 9.0.2
For those who have used this technique with Portal 9.0.2, here are the main differences between this note and the same note for Portal 9.0.2.
Cutter database
- Portal 9.0.2: can be part of an infrastructure database or of a custom external database; the portal schema is imported into an empty database.
- Portal 9.0.4: can only be installed in a 'Cutter database', a database created with RepCA or OUI, always containing OID, DCM and so on; the portal schema is imported into this 'Cutter database' (new).
group_sec.sql
- Portal 9.0.2: group_sec.sql is used to correct the GUIDs of OID stored in Portal.
- Portal 9.0.4: ptlconfig -dad portal -oid is used to correct the GUIDs of OID stored in Portal (new).
1 script
- Portal 9.0.2: the import/export is divided into several steps with several scripts.
- Portal 9.0.4: the import script runs in one step, and the additional steps are included in the script. This requires knowing the hostname and port of the original development machine (new).
Import
- Portal 9.0.2: the steps are: creation of an empty database; creation of the users with password=username; import.
- Portal 9.0.4: the steps are: creation of an IAS 10g infrastructure DB (RepCA or OUI); deletion of the new portal schemas (new); creation of the users with the same passwords as the schemas just dropped; import.
DAD
- Portal 9.0.2: the DAD needed to be changed.
- Portal 9.0.4: the passwords are not changed, so the DAD does not need to be changed.
Bugs
- Portal 9.0.2: 2 bugs were worked around by change_host.sh.
- Portal 9.0.4: some additional tables need to be updated manually before running ptlasst. This is Bug 3762961.
Export of LDAP
- Portal 9.0.2: the export is done in LDIF files. If prod and dev have different domains, it is quite difficult to change the domain name in these files due to the line wrapping at 78 characters.
- Portal 9.0.4: the export is done in XML files, in the DSML format (new). It is a lot easier to change the XML files if the domain name differs between PROD and DEV.
Download
- Portal 9.0.2: you have to cut and paste the scripts.
- Portal 9.0.4: the scripts are attached to the note; just download them.
Rewiring
- Portal 9.0.2 uses ptlasst:
ptlasst.csh -mode MIDTIER -i custom -s $PORTAL_USER -sp $PORTAL_PASSWORD -c $PORTAL_HOSTNAME:$PORTAL_DB_PORT:$PORTAL_SERVICE_NAME -sdad $PORTAL_DAD -o orasso -op $ORASSO_PASSWORD -odad orasso -host $MIDTIER_HOSTNAME -port $MIDTIER_HTTP_PORT -ldap_h $INFRA_HOSTNAME -ldap_p $OID_PORT -ldap_w $IAS_PASSWORD -pwd $IAS_PASSWORD -sso_c $INFRA_HOSTNAME:$INFRA_DB_PORT:$INFRA_SERVICE_NAME -sso_h $INFRA_HOSTNAME -sso_p $INFRA_HTTP_PORT -ultrasearch -oh $MIDTIER_ORACLE_HOME -mc false -mi true -chost $MIDTIER_HOSTNAME -cport_i $WEBCACHE_INV_PORT -cport_a $WEBCACHE_ADM_PORT -wc_i_pwd $IAS_PASSWORD -emhost $INFRA_HOSTNAME -emport $EM_PORT -pa orasso_pa -pap $ORASSO_PA_PASSWORD -ps orasso_ps -pp $ORASSO_PS_PASSWORD -iasname $IAS_NAME -verbose -portal_only
- Portal 9.0.4 uses ptlconfig (new):
ptlconfig -dad portal
Environment variables
- Portal 9.0.2: a lot of environment variables are needed.
- Portal 9.0.4: just 3 environment variables are needed: the password of SYS, the password of IAS, and the ORACLE_HOME of the midtier. All the rest is found in iasconfig.xml and LDAP (new).
    TO DO
    - Check if the orclcommonapplication name fits SID.hostname
    - Check what gives the import of a portal30 upgraded schema inside a schema named portal
    - Explain how to copy the portal*.dbf files in place of export/import and the limitation of tra

• Why does this class not read and write at the same time???

I had another thread where I was trying to combine two class files together:
http://forum.java.sun.com/thread.jspa?threadID=5146796
I managed to do it myself, but when I run the file it does not work right. If I write a file and then try to open it, it says there are no records in the file; but if I close the GUI down and open the file, everything gets read in as normal. Can anybody tell me why?
    import java.io.*;
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    import bank.BankUI;
    import bank.*;
public class testing extends JFrame {
   private ObjectOutputStream output;
   private BankUI userInterface;
   private JButton SaveToFile, SaveAs, Exit; // SaveToFile also saves to the store - needs splitting into 2 buttons
   //private Store store; MIGHT BE SOMETHING TO DO WITH THIS AS I HAD TO COMMENT THIS STORE OUT TO GET IT WORKING AS STORE IS USED BELOW
   private Employee record;

   // fields merged in from the "read" class
   private ObjectInputStream input;
   private JButton nextButton, openButton, nextRecordButton;
   private Store store = new Store(100);
   private Employee employeeList[] = new Employee[100];
   private int count = 0, next = 0;

   // set up GUI
   public testing()
   {
      super( "Employee Data" ); // appears in the title bar of the GUI

      // create instance of reusable user interface
      userInterface = new BankUI( 9 ); // nine textfields
      getContentPane().add( userInterface, BorderLayout.CENTER );

      // configure the "Save as.." button
      SaveAs = userInterface.getSaveAsButton();
      SaveAs.setText( "Save as.." );

      // ---------- from the "read" class ----------
      openButton = userInterface.getOpenFileButton();
      openButton.setText( "Open File" );
      openButton.addActionListener(
         // anonymous inner class to handle openButton event
         new ActionListener() {
            public void actionPerformed( ActionEvent event )
            {
               openFile();
            }
         } // end anonymous inner class
      ); // end call to addActionListener

      // register window listener for window closing event
      addWindowListener(
         // anonymous inner class to handle windowClosing event
         new WindowAdapter() {
            // close file and terminate application
            public void windowClosing( WindowEvent event )
            {
               if ( input != null )
                  closeFile();
               System.exit( 0 );
            }
         } // end anonymous inner class
      ); // end call to addWindowListener

      // get reference to generic task button doTask2 from BankUI
      nextButton = userInterface.getDoTask2Button();
      nextButton.setText( "Next Record" );
      nextButton.setEnabled( false );
      // register listener to call readRecord when button pressed
      nextButton.addActionListener(
         new ActionListener() {
            public void actionPerformed( ActionEvent event )
            {
               readRecord();
            }
         }
      ); // end call to addActionListener

      // get reference to generic task button doTask3 from BankUI
      nextRecordButton = userInterface.getDoTask3Button();
      nextRecordButton.setText( "Get Next Record" );
      nextRecordButton.setEnabled( false );
      // register listener to call getNextRecord when button pressed
      nextRecordButton.addActionListener(
         new ActionListener() {
            public void actionPerformed( ActionEvent event )
            {
               getNextRecord();
            }
         }
      ); // end call to addActionListener
      // ---------- from the "read" class end ----------

      // register listener to call SaveLocation when button pressed
      SaveAs.addActionListener(
         new ActionListener() {
            public void actionPerformed( ActionEvent event )
            {
               SaveLocation();
            }
         }
      ); // end call to addActionListener

      // configure the save button for use in this program
      SaveToFile = userInterface.getSaveStoreToFileButton();
      SaveToFile.setText( "Save to store and to file need to split this task up" );
      SaveToFile.setEnabled( false ); // disable button
      // register listener to call addRecord when button pressed
      SaveToFile.addActionListener(
         new ActionListener() {
            public void actionPerformed( ActionEvent event )
            {
               addRecord(); // NEED TO SPLIT UP SO DONT DO BOTH
            }
         }
      ); // end call to addActionListener

      Exit = userInterface.getExitAndSaveButton();
      Exit.setText( "Exit " );
      Exit.setEnabled( false ); // disable button
      // register listener to save the current record and close everything
      Exit.addActionListener(
         new ActionListener() {
            public void actionPerformed( ActionEvent event )
            {
               addRecord(); // adds record to the store
               closeFile(); // writes the store to the file and closes everything
            }
         }
      ); // end call to addActionListener

      // second window listener: add current record in GUI to file, then close file
      addWindowListener(
         new WindowAdapter() {
            public void windowClosing( WindowEvent event )
            {
               if ( output != null ) {
                  addRecord();
                  closeFile();
               }
            }
         }
      ); // end call to addWindowListener

      setSize( 600, 500 );
      setVisible( true );

      store = new Store(100);
   } // end testing constructor

   // ---------- from the "read" class ----------
   // enable user to select file to open
   private void openFile()
   {
      // display file dialog so user can select file to open
      JFileChooser fileChooser = new JFileChooser();
      fileChooser.setFileSelectionMode( JFileChooser.FILES_ONLY );
      int result = fileChooser.showOpenDialog( this );

      // if user clicked Cancel button on dialog, return
      if ( result == JFileChooser.CANCEL_OPTION )
         return;

      // obtain selected file
      File fileName = fileChooser.getSelectedFile();

      // display error if file name invalid
      if ( fileName == null || fileName.getName().equals( "" ) )
         JOptionPane.showMessageDialog( this, "Invalid File Name",
            "Invalid File Name", JOptionPane.ERROR_MESSAGE );
      else {
         // open file
         try {
            input = new ObjectInputStream( new FileInputStream( fileName ) );
            openButton.setEnabled( false );
            nextButton.setEnabled( true );
         }
         // process exceptions opening file
         catch ( IOException ioException ) {
            JOptionPane.showMessageDialog( this, "Error Opening File",
               "Error", JOptionPane.ERROR_MESSAGE );
         }
      } // end else
   } // end method openFile

   public void readRecord() // needs merging with getNextRecord
   {
      Employee record;

      // input the values from the file
      try {
         record = ( Employee ) input.readObject();
         employeeList[ count++ ] = record;
         store.add( record ); // adds the record to the Store
         store.displayAll();
         System.out.println( "Count is " + store.getCount() );

         // create array of Strings to display in GUI (one field per line to look neater)
         String values[] = {
            String.valueOf( record.getName() ),
            String.valueOf( record.getGender() ),
            String.valueOf( record.getDateOfBirth() ),
            String.valueOf( record.getID() ),
            String.valueOf( record.getStartDate() ),
            String.valueOf( record.getSalary() ),
            String.valueOf( record.getAddress() ),
            String.valueOf( record.getNatInsNo() ),
            String.valueOf( record.getPhone() )
         };

         // display record contents
         userInterface.setFieldValues( values );
      }
      // display message when end-of-file reached
      catch ( EOFException endOfFileException ) {
         nextButton.setEnabled( false );
         nextRecordButton.setEnabled( true );
         JOptionPane.showMessageDialog( this, "No more records in file",
            "End of File", JOptionPane.ERROR_MESSAGE );
      }
      // display error message if class is not found
      catch ( ClassNotFoundException classNotFoundException ) {
         JOptionPane.showMessageDialog( this, "Unable to create object",
            "Class Not Found", JOptionPane.ERROR_MESSAGE );
      }
      // display error message if cannot read due to problem with file
      catch ( IOException ioException ) {
         JOptionPane.showMessageDialog( this, "Error during read from file",
            "Read Error", JOptionPane.ERROR_MESSAGE );
      }
   } // end method readRecord

   private void getNextRecord()
   {
      Employee record = employeeList[ next++ % count ]; // cycles through the records read so far

      // create array of Strings to display in GUI
      String values[] = {
         String.valueOf( record.getName() ),
         String.valueOf( record.getGender() ),
         String.valueOf( record.getStartDate() ),
         String.valueOf( record.getAddress() ),
         String.valueOf( record.getNatInsNo() ),
         String.valueOf( record.getPhone() ),
         String.valueOf( record.getID() ),
         String.valueOf( record.getDateOfBirth() ),
         String.valueOf( record.getSalary() ) };

      // display record contents
      userInterface.setFieldValues( values );
   } // end method getNextRecord
   // ---------- from the "read" class end ----------

   private void SaveLocation()
   {
      // display file dialog so user can choose where to save
      JFileChooser fileChooser = new JFileChooser();
      fileChooser.setFileSelectionMode( JFileChooser.FILES_ONLY );
      int result = fileChooser.showSaveDialog( this );

      // if user clicked Cancel button on dialog, return
      if ( result == JFileChooser.CANCEL_OPTION )
         return;

      File fileName = fileChooser.getSelectedFile(); // get selected file

      // display error if invalid
      if ( fileName == null || fileName.getName().equals( "" ) )
         JOptionPane.showMessageDialog( this, "Invalid File Name",
            "Invalid File Name", JOptionPane.ERROR_MESSAGE );
      else {
         // open file
         try {
            output = new ObjectOutputStream( new FileOutputStream( fileName ) );
            SaveAs.setEnabled( false );
            SaveToFile.setEnabled( true );
            Exit.setEnabled( true );
         }
         // process exceptions from opening file
         catch ( IOException ioException ) {
            JOptionPane.showMessageDialog( this, "Error Opening File",
               "Error", JOptionPane.ERROR_MESSAGE );
         }
      } // end else
   } // end method SaveLocation

   // close file(s) and terminate application
   private void closeFile()
   {
      try {
         if ( output != null ) {
            // cycle through each record in the store and write it to the file;
            // this is the only place the records are actually written to disk
            int storeSize = store.getCount();
            for ( int i = 0; i < storeSize; i++ )
               output.writeObject( store.elementAt( i ) );
            output.close();
         }
         if ( input != null ) // from the "read" class
            input.close();
         System.exit( 0 );
      }
      // process exceptions from closing file
      catch ( IOException ioException ) {
         JOptionPane.showMessageDialog( this, "Error closing file",
            "Error", JOptionPane.ERROR_MESSAGE );
         System.exit( 1 );
      }
   } // end method closeFile

   // add record to the store (note: the write to the file happens in closeFile,
   // which is why a file opened while this GUI is still running appears empty)
   public void addRecord()
   {
      int employeeNumber = 0;
      String fieldValues[] = userInterface.getFieldValues();

      // if the ID field value is not empty
      if ( ! fieldValues[ BankUI.IDNUMBER ].equals( "" ) ) {
         try {
            employeeNumber = Integer.parseInt( fieldValues[ BankUI.IDNUMBER ] );
            String dob = fieldValues[ BankUI.DOB ];
            String[] dateofBirth = dob.split( "-" ); // the separator between the numbers; change if you use "/"
            String sDate = fieldValues[ BankUI.START ];
            String[] startDate = sDate.split( "-" );
            String sex = fieldValues[ BankUI.GENDER ];
            char gender = sex.charAt( 0 ); // check if m or f, probably in Employee

            if ( employeeNumber >= 0 ) {
               /* create new record: String name, char gender, Date dob, String add,
                  String nin, String phone, String id, Date start, float salary */
               record = new Employee(
                  fieldValues[ BankUI.NAME ],
                  gender,
                  new Date( Integer.parseInt( dateofBirth[0] ),
                            Integer.parseInt( dateofBirth[1] ),
                            Integer.parseInt( dateofBirth[2] ) ),
                  fieldValues[ BankUI.ADDRESS ],
                  fieldValues[ BankUI.NATINTNO ],
                  fieldValues[ BankUI.PHONE ],
                  fieldValues[ BankUI.IDNUMBER ],
                  new Date( Integer.parseInt( startDate[0] ),
                            Integer.parseInt( startDate[1] ),
                            Integer.parseInt( startDate[2] ) ),
                  Float.parseFloat( fieldValues[ BankUI.SALARY ] ) );

               if ( !store.isFull() )
                  store.add( record );
               else {
                  JOptionPane.showMessageDialog( this, "The Store is full you cannot add\n" +
                     "anymore employees. \nPlease Save Current File and Create a New File." );
                  System.out.println( "Store full" );
               }
               store.displayAll();
               System.out.println( "Count is " + store.getCount() );

               //output.writeObject( record ); // the record is only written in closeFile()
               output.flush();
            }
            else
               JOptionPane.showMessageDialog( this,
                  "Account number must be greater than 0",
                  "Bad account number", JOptionPane.ERROR_MESSAGE );

            // clear textfields
            userInterface.clearFields();
         } // end try
         // process invalid number formats
         catch ( NumberFormatException formatException ) {
            JOptionPane.showMessageDialog( this,
               "Bad ID number, Date or Salary", "Invalid Number Format",
               JOptionPane.ERROR_MESSAGE );
         }
         // process badly formatted dates
         catch ( ArrayIndexOutOfBoundsException arrayException ) {
            JOptionPane.showMessageDialog( this,
               "Error with Start Date or Date of Birth\nPlease enter like: 01-01-2001",
               "IO Exception", JOptionPane.ERROR_MESSAGE );
         }
         // process exceptions from file output
         catch ( IOException ioException ) {
            JOptionPane.showMessageDialog( this, "Error writing to file",
               "IO Exception", JOptionPane.ERROR_MESSAGE );
            closeFile();
         }
      } // end if
   } // end method addRecord

   public static void main( String args[] )
   {
      new testing();
   }
} // end class testing

    Sure you can read and write at the same time. But normally you would be reading from one place and writing to another place.
    I rather regret avoiding the OP's earlier post asking how to combine two classes. I looked at the two classes posted and realized the best thing to do was actually to break them into more classes. But I also realized it was going to be a big job explaining why and how, and I just don't have the patience for that sort of thing.
    So now we have a Big Ball Of Tar™ and I feel partly responsible.
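    To see the reply's point concretely, here is a minimal sketch (not the OP's code; the file names and the generic record handling are assumed for illustration) of reading serialized objects from one file while writing them to a different file, so the input and output streams never contend for the same place:
    import java.io.*;
    public class CopyRecords {
       public static void main( String args[] ) throws Exception {
          // read from one file, write to another -- two distinct streams
          try ( ObjectInputStream in = new ObjectInputStream(
                   new FileInputStream( "employees.dat" ) );          // assumed source file
                ObjectOutputStream out = new ObjectOutputStream(
                   new FileOutputStream( "employees-copy.dat" ) ) ) { // assumed target file
             while ( true ) {
                try {
                   out.writeObject( in.readObject() ); // copy the next record across
                }
                catch ( EOFException endOfFile ) {
                   break; // no more records in the source file
                }
             }
          } // try-with-resources closes both streams automatically
       }
    }
    The same separation would untangle the code above: closeFile() could write the store to a fresh file instead of reusing whichever stream happens to be open.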

  • Windows 7 install stops at "CD/DVD driver not found"

    When trying to install Windows 7 with Boot Camp, all goes well until it stops with the "CD/DVD driver not found" error. I have a mid-2010 Mac Pro with a 500 GB SSD. I burned the DVD at 1X from an ISO file.
    Thanks

    Can you verify that the MD5/SHA1 of the burned DVD matches the ISO, using OS X's 'openssl md5' or Microsoft's FCIV tool? The source of your ISO should be able to provide the expected MD5/SHA1.
    OS X Example
    openssl md5 ~/Desktop/GRMCPRXVOL_EN_DVD.cdr
    MD5(/Users/MyName/Desktop/GRMCPRXVOL_EN_DVD.cdr)= 977f4f0f1400be91855789213e07b031
    Windows FCIV example
    D:\>C:\"Program Files (x86)"\FCIV\fciv
    // File Checksum Integrity Verifier version 2.05.
    Usage:  fciv.exe [Commands] <Options>
    Commands: ( Default -add )
            -add    <file | dir> : Compute hash and send to output (default screen).
                    dir options:
                    -r       : recursive.
                    -type    : ex: -type *.exe.
                    -exc file: list of directories that should not be computed.
                    -wp      : Without full path name. ( Default store full path)
                    -bp      : specify base path to remove from full path name
            -list            : List entries in the database.
            -v               : Verify hashes.
                             : Option: -bp basepath.
            -? -h -help      : Extended Help.
    Options:
            -md5 | -sha1 | -both    : Specify hashtype, default md5.
            -xml db                 : Specify database format and name.
    To display the MD5 hash of a file, type fciv.exe filename
    D:\>C:\"Program Files (x86)"\FCIV\fciv -add Windows8.1-64bit.cdr
    // File Checksum Integrity Verifier version 2.05.
    f104b78019e86e74b149ae5e510f7be9 windows8.1-64bit.cdr
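    If you'd rather script the check than install FCIV, here is a minimal sketch in Java (the file to hash is taken from the command line; nothing here is specific to the tools above) using the standard java.security.MessageDigest API to print a file's MD5 for comparison with the published hash:
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.MessageDigest;
    public class Md5Check {
       public static void main( String args[] ) throws Exception {
          MessageDigest md5 = MessageDigest.getInstance( "MD5" );
          // stream the file through the digest in chunks to keep memory use flat
          try ( InputStream in = new FileInputStream( args[ 0 ] ) ) {
             byte[] buffer = new byte[ 8192 ];
             int read;
             while ( ( read = in.read( buffer ) ) != -1 )
                md5.update( buffer, 0, read );
          }
          // print the digest as lowercase hex, the same form openssl and fciv print
          StringBuilder hex = new StringBuilder();
          for ( byte b : md5.digest() )
             hex.append( String.format( "%02x", b ) );
          System.out.println( hex );
       }
    }
    Run it as "java Md5Check GRMCPRXVOL_EN_DVD.cdr" and compare the output against the hash from your ISO's source.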

  • External hard drive for my Mac and PC

    I recently bought a 15in MBP. I had been using an external hard drive to store my music with a PC's iTunes. When I connected it to my MBP it didn't recognize it. So I put everything back onto my PC, and used the now-empty hard drive as the device for my Time Machine. Now when I connect the external hard drive to the PC, it doesn't recognize it. So now I can't transfer my music back onto my external hard drive and use it with my MBP. I need help!
    Will I have to buy a whole new external hard drive to put my music on and then use it for my MBP? Do I have to buy a specific brand, and will it have to be formatted for Mac initially? If so, will it work on a PC? I just need advice.

    For both the PC and Mac to recognise the hard drive, it has to be formatted as FAT or FAT32. Macs can read, but not write, NTFS drives (the standard format for Windows Vista/7); Windows can't read or write Mac OS formatted drives (which is what Time Machine requires).
    Empty the drive, and reformat it in Windows as FAT32. There are some limitations: FAT32 won't allow an individual file larger than 4GB, which may be a problem if you store full movies in iTunes. But it will work.
    If you're not happy to do that, there is software such as MacDrive for Windows that will allow Windows machines to deal with Mac formatted drives. I'd guess there are equivalents for the Mac to write to NTFS too.
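    If you want to know in advance whether that 4GB ceiling will bite, a small sketch along these lines (the directory to scan comes from the command line; the class name is made up) lists any files too large for a FAT32 volume before you copy:
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;
    public class Fat32Check {
       // FAT32 caps a single file at 4GB minus one byte
       static final long FAT32_MAX = ( 1L << 32 ) - 1;
       public static void main( String args[] ) throws IOException {
          try ( Stream<Path> paths = Files.walk( Paths.get( args[ 0 ] ) ) ) {
             paths.filter( Files::isRegularFile ).forEach( p -> {
                try {
                   if ( Files.size( p ) > FAT32_MAX )
                      System.out.println( "Too big for FAT32: " + p );
                }
                catch ( IOException ignored ) { } // unreadable file, skip it
             } );
          }
       }
    }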

  • Moving photos and music from iPod to Mac

    I have a MacBook and my hard drive just died. I was wondering if I could move the photos and music that are currently on my iPod back onto my Mac's new hard drive. I had the box in iTunes checked that said store full resolution photos on the iPod after enabling disk use, but I currently don't have disk use enabled. Could I enable it and then sync the photos and music back?

    Connect your iPod to your computer. If it is set to update automatically you'll get a message that it is linked to a different library, asking if you want to link to this one and replace all your songs etc. Press "Cancel": pressing "Erase and Sync" would irretrievably remove all the songs from your iPod. Your iPod should then appear in the iTunes source list, from where you can change the update setting to manual and use your iPod without the risk of accidentally erasing it. Most of the utilities listed below also need the iPod enabled for disk use; changing to manual update will do this by default: Managing content manually on iPod
    Once you are safely connected there are a few things you can do to restore your iTunes from the iPod. If you have any iTunes Music Store purchases, the ability to transfer purchased content from the iPod to authorised computers was introduced with iTunes 7. A paragraph on it has been added to this article: Transfer iTunes Store purchases using iPod
    The transfer of content from other sources, such as songs imported from CD, is designed by default to be one way, from iTunes to iPod. However, there are a number of third-party utilities that you can use to retrieve music files and playlists from your iPod. I use Senuti, but have a look at the web pages and documentation for the others too; you'll find that they have varying degrees of functionality, and some will transfer movies, videos, photos and games as well. This is just a small selection of what's available; you can also read reviews of some of them here: Wired News - Rescue Your Stranded Tunes
    Senuti Mac Only
    PodView Mac Only
    PodWorks Mac Only
    iPodDisk PPC Mac Only (experimental version available for Intel Macs)
    iPodRip Mac & Windows
    YamiPod Mac & Windows
    Music Rescue Mac & Windows
    iPodCopy Mac & Windows
    There's also a manual method of copying songs from your iPod to a Mac or PC. The procedure is a bit involved and won't recover playlists but if you're interested it's available at this link: Two-way Street: Moving Music Off the iPod
    If you have full resolution copies of the photos on the iPod have a look here: Apple Knowledge Base article - Use Disk Mode to copy photos from iPod
    Keep your iPod in manual mode until you have reloaded your iTunes and you are happy with your playlists etc.; then it will be safe to return it to auto-sync. I would also advise that you get yourself an external hard drive and back your stuff up. Relying on an iPod as your sole backup is not a good idea, and external drives are comparatively inexpensive these days; you can get loads of storage for a reasonable outlay.
