Getting datastore id

I wonder if there is a way to get the datastore id in Java code.
I would actually like to benefit from id generation at the database level.
If I map the datastore id to a field in the Java class, I get an error
when creating the object, reporting that the two values are not in sync,
which is understandable.
Is there a way to fetch the datastore id (other than using direct JDBC),
or to declare a field as read-only?
Regards,
J-F

Just what I did a few days ago ...
If you use application identity, you can nevertheless leave the task of
generating an id to Kodo.
Chapter "5.2.4.3. Sequence-Assigned" of the manual is what solved my problem.
If you want to use datastore identity and just sometimes want to know the
primary-key value, you can cast the ObjectId (JDOHelper.getObjectId) to
kodo.util.Id; that class lets you access the primary key as well.
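
For illustration, a minimal sketch of that second approach (datastore identity), assuming a hypothetical persistence-capable class MyEntity and that kodo.util.Id exposes the generated key through getId() (check the Kodo Javadoc for the exact accessor name):

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

public class DatastoreIdExample {
    // Persist an object that uses datastore identity, then read the
    // database-generated key back from the opaque JDO ObjectId.
    static long persistAndGetId(PersistenceManagerFactory pmf) {
        PersistenceManager pm = pmf.getPersistenceManager();
        try {
            pm.currentTransaction().begin();
            MyEntity e = new MyEntity();          // hypothetical persistent class
            pm.makePersistent(e);
            pm.currentTransaction().commit();

            // Cast the ObjectId to kodo.util.Id, as described above,
            // to get at the primary-key value.
            kodo.util.Id oid = (kodo.util.Id) JDOHelper.getObjectId(e);
            return oid.getId();                   // assumed accessor name
        } finally {
            pm.close();
        }
    }
}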

Similar Messages

  • Rename Datastores

    Hello all.  I have a simple script to rename datastores.  Input is a list of datastore names.  The new name is the current datastore name prefixed with "DoNotUse_".  However, it is very slow.
    Any suggestions on speeding it up?  Thank you very much
    $DSList = Get-Content -Path c:\temp\DSlist.txt
    Foreach ($DS in $DSList){
         $NewDS = "DoNotUse_$DS"
         Get-Datastore -Name $DS | Set-Datastore -Name $NewDS
    }

    See if the following is faster
    $dsList = Get-Content -Path c:\temp\DSlist.txt
    $filter = $dsList -join '|'
    Get-View -ViewType Datastore -Property Name -Filter @{'Name'=$filter} | %{
      $_.Rename("DoNotUse_$($_.Name)")
    }

  • Datastore Cluster & Host Report

    Hi,
    I have run a PowerCLI script to determine how many datastores are 90% full or more.  I now need to provide the Storage team with a list of Datastore Clusters, containing ESXi hosts and datastores, in order for them to provision additional LUNs (it would appear that what they see on the SAN array side differs from what is seen on the vSphere side?)
    Could someone kindly provide a ready-made script for this? 
    The script below only provides cluster details, whereas the requirement is for Datastore Cluster, Host, and Datastore details.
    foreach ($Cluster in (Get-Cluster)) {
    foreach ($VMHost in (Get-VMHost -Location $Cluster)) {
    $VMHost | Get-Datastore |
    Select-Object -Property @{Name="Cluster";Expression={$Cluster.Name}},
    @{Name="VMHost";Expression={$VMHost.Name}},
    @{Name="Datastore";Expression={$_.Name}} |
    Export-Csv "d:\Scripts\Report1.csv" -NoTypeInformation -UseCulture
    }
    }
    Thanks for all your efforts in advance.

    The ForEach statement doesn't place anything on the pipeline.
    You can bypass that by using the Call operator (&).
    Something like this
    &{foreach($dsc in Get-DatastoreCluster){
      foreach($ds in (Get-Datastore -RelatedObject $dsc)){
        Get-VMHost -Datastore $ds |
        Select -Property @{N='VMHost';E={$_.Name}},
          @{N='DatastoreCluster';E={$dsc.Name}},
          @{N='Datastore';E={$ds.Name}}
      }
    }} | Export-Csv "d:\Scripts\Report1.csv" -NoTypeInformation -UseCulture

  • How to export data to multiple csv files?

    Hey Scripting Guys,
    As the name states, I'm a novice at scripting.  Typically I'm able to resolve most of my scripting challenges by reading through your site or scouring the internet.  This challenge I haven't been able to resolve.  Please help!!
    I'm running a script (posted below) to grab data and export it to a CSV file.  My challenge is that I want to run the script daily via Task Scheduler and have it create a new CSV file either daily or weekly.  I'm having trouble getting the script to create a new CSV file.  How do I resolve this?
    It would be beneficial to append the date to a standard file name, e.g. c:\exportedcsv7-11-2014.csv; the next day it would be c:\exportedcsv7-12-2014.csv; and so on.
    Thank you in advance to any assistance.
    Respectfully,
    ScriptingNovice
    Get-Datastore -Name "*DS*" | Sort $_.name | Get-View | Select -ExpandProperty Summary | `
    Select Name,
    @{N=”FreeSpaceGB”;E={[Math]::Round($_.FreeSpace/1GB,2)}},
    @{N=”CapacityGB”; E={[Math]::Round($_.Capacity/1GB,2)}},
    @{N=”UncommittedGB”; E={[Math]::Round($_.Uncommitted/1GB,2)}},
    @{N=”ProvisionedGB”;E={[Math]::Round(($_.Capacity – $_.FreeSpace + $_.Uncommitted)/1GB,2)}},
    @{N=”Over-Provisioned-DS”;E={([Math]::Round($_.Capacity/1GB,2)) – ([Math]::Round(($_.Capacity – $_.FreeSpace + $_.Uncommitted)/1GB,2))}}| `
    select Name,CapacityGB,FreespaceGB,ProvisionedGB,UncommittedGB,Over-Provisioned-DS | export-csv -notype c:\vmds.csv

    Thank you for the information and the tip.  I took your advice and did some research on strings, something that I do have trouble grasping.  I'm familiar with variables already and feel comfortable using them.  I also need to study .NET, which I totally don't understand.
    Since I appreciate your advice and guidance I'd like to know if I'm on the correct track.  Please look at my breakdown to see if I'm explaining it correctly.
    $d=Get-Date
    Here a variable is being created using the Get-Date cmdlet, if we execute $d the date will appear
    $d.ToString('dd-MM-yyyy')
    This converts the date into the format dd-MM-yyyy.
    I took the information and tips you provided then came up with this after reading about strings.
    export-csv -notype "c:\folder\vmds_$($d.ToString('MM-dd-yyyy')).csv"
    The double quotes evaluate the variables.  The single quotes do not evaluate anything; they just show what's inside them (a literal string).  The $() evaluates the expression inside it, here $d.ToString('MM-dd-yyyy'), before the string is written.
    You are correct, I definitely need to strengthen the foundation of my basics.  Thank you for the direction and advice.
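
    For what it's worth, a minimal sketch that puts the pieces above together (the folder path is a placeholder, and the report columns are trimmed to two for brevity):
    # Build a date-stamped file name, then export the datastore report to it
    $d = Get-Date
    $csv = "c:\folder\vmds_$($d.ToString('MM-dd-yyyy')).csv"
    Get-Datastore -Name "*DS*" | Get-View | Select -ExpandProperty Summary |
      Select Name,
        @{N="CapacityGB";E={[Math]::Round($_.Capacity/1GB,2)}},
        @{N="FreeSpaceGB";E={[Math]::Round($_.FreeSpace/1GB,2)}} |
      Export-Csv -NoTypeInformation $csv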

  • Need to reconfigure the VM after vm deployment

    Hi All,
    I have a script to deploy a VM from a CSV file. I need to change the VM (CPU & memory) configuration after the VM deployment, where the name of the VM should be taken from the CSV. Any help is much appreciated.
    Attaching the script used (downloaded, and it works fine).
    $vms = Import-CSV D:\vm-deploy\vm-deploy.csv
    foreach ($vm in $vms){
          $Template = Get-Template $vm.template
          $VMHost = Get-VMHost $vm.host
          $Datastore = Get-Datastore $vm.datastore
          $OSCustomization = Get-OSCustomizationSpec $vm.customization
          New-VM -Name $vm.name -OSCustomizationSpec $OSCustomization -Template $Template -VMHost $VMHost -Datastore $Datastore -RunAsync
    }
    ## This cmdlet needs to execute after the VM deployment has completed. The VM name should come from the first column of the CSV file.
    Set-VM $vmname -NumCpu 2 -MemoryGB 5 -Confirm:$false
    Write-Host "All vms deployed, " -ForegroundColor Green
    Disconnect-VIServer -Confirm:$false
    CSV file looks like this
    Name,Template,host,datastore,customization
    Test-VM,vm-dep-test,ESX02,P5-vm-stage,vm-deploy
    Thanks in advance.....

    A simpler, but not optimal, solution is to use the Wait-Task cmdlet.
    Something like this
    $vms = Import-CSV D:\vm-deploy\vm-deploy.csv
    $tasks = @()
    foreach ($vm in $vms){
          $Template = Get-Template $vm.template
          $VMHost = Get-VMHost $vm.host
          $Datastore = Get-Datastore $vm.datastore
          $OSCustomization = Get-OSCustomizationSpec $vm.customization
          $tasks += New-VM -Name $vm.name -OSCustomizationSpec $OSCustomization -Template $Template -VMHost $VMHost `
              -Datastore $Datastore -RunAsync
    }
    Wait-Task -Task $tasks
    foreach($vm in $vms){
      Set-VM $vm.name -NumCpu 2 -MemoryGB 5 -Confirm:$false
    }

  • New-harddisk commandlet

    After reviewing VMware's documentation on this cmdlet I have attempted to add hard disks to a VM using the -Controller and -Datastore parameters.
    What I would like to do is specify the following:
    New-HardDisk -VM $Vm -CapacityKB ($size*1mb) -StorageFormat "Thick" -Controller '0,10' -Datastore '[san-data-050]'
    This obviously did not work, because the controller and datastore parameters do not accept those arguments. After doing some more research I attempted the solutions below.
    Problem 1:
    I understand that -Controller only accepts the base controller object, and I have written a function that pulls the base controller object from the VM using:
    $scsictrl = Get-ScsiController -VM $vm | ?{$_.Name -eq "SCSI controller $SCSI"}
    Now that that's solved, I still cannot specify the SCSI id or port I wish to.
    Problem 2:
    Since that problem could not be solved, I attempted to solve the $datastore issue. I attempted:
    $datastore = 'san-data-050'
    Get-Datastore | Where-Object {$_.Name -eq $datastore}
    which yields the datastore object, and inserted it into the code without the controller parameter present.
    This also did not work.
    Before anyone comments about Onyx, I've already gotten several scripts and attempted several times to get that method to work as well. It launches a task, but the task fails once it hits vCenter and I cannot identify why; regardless, it is overly complicated for what should be a straightforward single-line solution.
    Does anyone know how I might solve the above issues?

    PowerCLI Version
       VMware vSphere PowerCLI 5.0.1 build 581491
    Snapin Versions
       VMware AutoDeploy PowerCLI Component 5.0 build 544967
       VMware ImageBuilder PowerCLI Component 5.0 build 544967
       VMware License PowerCLI Component 5.0 build 544881
       VMware vSphere PowerCLI Component 5.0 build 581435
    Just tried the onyx version:
    function add-VMhdd{
      <#
      .SYNOPSIS
      Adds a new virtual hard disk to a VM on a given SCSI controller and unit number.
      .DESCRIPTION
        Builds a VirtualMachineConfigSpec and calls ReconfigVM_Task to create and attach the disk.
      .EXAMPLE
      add-VMhdd "vmname" '[san-data-n080]' '1048576' '0' '11'
      .PARAMETER Datastore
      Specify the datastore in the following format [san-data-n080]
      #>
        param (
    [Parameter(Mandatory=$True,Position=0)]
            [ValidateNotNullOrEmpty()]
            [String]$VMname,
    [Parameter(Mandatory=$True,Position=1)]
            [ValidateNotNullOrEmpty()]
            [String]$datastore,
        [Parameter(Mandatory=$True,Position=2)]
            [ValidateNotNullOrEmpty()]
            [String]$sizeinKB,
    [Parameter(Mandatory=$True,Position=3)]
            [ValidateNotNullOrEmpty()]
            [String]$scsiAdaptor,
    [Parameter(Mandatory=$True,Position=4)]
            [ValidateNotNullOrEmpty()]
            [String]$scsiNumber
        )
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.deviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec[] (1)
    $spec.deviceChange[0] = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $spec.deviceChange[0].operation = "add"
    $spec.deviceChange[0].fileOperation = "create"
    $spec.deviceChange[0].device = New-Object VMware.Vim.VirtualDisk
    $spec.deviceChange[0].device.key = -100
    #specify datastore and persistance
    $spec.deviceChange[0].device.backing = New-Object VMware.Vim.VirtualDiskFlatVer2BackingInfo
    $spec.deviceChange[0].device.backing.fileName = "$datastore"
    $spec.deviceChange[0].device.backing.diskMode = "persistent"
    $spec.deviceChange[0].device.backing.split = $false
    $spec.deviceChange[0].device.backing.writeThrough = $false
    $spec.deviceChange[0].device.backing.thinProvisioned = $false
    $spec.deviceChange[0].device.backing.eagerlyScrub = $false
    #setdisk properties
    $spec.deviceChange[0].device.connectable = New-Object VMware.Vim.VirtualDeviceConnectInfo
    $spec.deviceChange[0].device.connectable.startConnected = $true
    $spec.deviceChange[0].device.connectable.allowGuestControl = $false
    $spec.deviceChange[0].device.connectable.connected = $true
    #Figure out how to specify controller and units
    $adaptor = get-Vmscsicontroller $VMname $scsiAdaptor
    $scsiControlerNumber = $adaptor.id.split("/")[1]
    $spec.deviceChange[0].device.controllerKey = $scsiControlerNumber
    $spec.deviceChange[0].device.unitNumber = $scsiNumber
    $spec.deviceChange[0].device.capacityInKB = $sizeinKB
    #return the UID of the vm and place in view
    $VMID = (get-vm $VMname).id
    $_this = Get-View -Id "$VMID"
    $_this.ReconfigVM_Task($spec)
    }
    PS C:\users\ron5667\desktop\ps> add-vmhdd "ntaddh0075m00" '[san-and-n080]' '1048576' '0' '11'
    Type                                                                                                      Value
    Task                                                                                                      task-719119
    The task runs; however, it shows an invalid configuration in vCenter.

  • How to sync repositories when the target datastore gets changed

    Hi,
    How do I sync the repositories when the target datastore gets changed?
    That is, suppose my target table, say TRG_SALES, has 6 columns, but later I alter the target table and add two more columns to it. How would I then re-sync it with the repository?
    Can anybody help me ?
    Thanks in advance.
    -Shrinivas

    Hi,
    By reverse engineering the target table in the ODI model. If you use 'Selective Reverse', then you can select the tables that you require. If it is only one table, then you can use the 'Object Mask' field.
    Cheers
    Bos

  • Error while activating data loaded into DataStore Object in BI 7.0

    Hi Guys,
    I am facing the following problem :
    When I load data into a Datastore Object, all the records get loaded but the job fails during activation of the data.
    Below is the job log :
    Activation is running: Data target 0RPM_DS07, from 77 to 77
    Overlapping check with archived data areas for InfoProvider 0RPM_DS07
    Check not necessary, as no data has been archived for 0RPM_DS07
    Data to be activated successfully checked against archiving objects
    SQL-END: 14.06.2007 12:33:25 00:00:00
    SQL Error: ORA-20000: Insufficient privileges
    Parallel processes (for Activation); 000003
    Timeout for parallel process (for Activation): 000300
    Package size (for Activation): 020000
    Task handling (for Activation): Backgr Process
    Server group (for Activation): No Server Group Configured
    All data fields updated in mode "overwrite"
    Resource error. No batch process available. Process terminated
    Time limit exceeded. No return of the split processes
    Resource error. No batch process available. Process terminated
    Request you to kindly help me resolve this issue.
    I am running the InfoPackage manually. Should I run it via a process chain?
    If I run it via a process chain, I get the following error in the first step:
    You do not have authorization for InfoSource 0RPM_ITEM_FIN_PLANNING.
    Awaiting your replies,
    Thanks,
    punkuj...

    Hi,
    I think this problem remains unanswered.
    The issue behind this problem is that during parallel activation the child jobs report their status back to the parent job. If a child job takes a long time to read data from the active data table, it times out and fails.
    Check the primary index on the active data table; it is probably missing in your case. Look at the indexes in SE11 or the missing indexes in DB02. That is what causes the timeout. Ask your Basis folks to rebuild the primary index and repeat the activation. It should then succeed.
    Thanks,
    Sri.

  • When I try to compose an email, I just get a compose window icon stuck in the Windows taskbar and can't actually enlarge or use it

    Ok, details of the installation are below. When I try to compose an email, I just get a compose window icon stuck in the
    Windows taskbar and can't actually enlarge or use it. If I restart in safe mode with all add-ons disabled, then it works fine. But
    if I restart normally and manually disable the add-ons/plugins, then close and start normally again (i.e. not safe mode), it
    breaks again, so it does not seem to be an add-on or plugin but rather something with the configuration.
    Application Basics
    Name Thunderbird
    Version 31.6.0
    User Agent Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Thunderbird/31.6.0
    Profile Folder
    Show Folder
    (Local drive)
    Application Build ID 20150330093429
    Enabled Plugins about:plugins
    Build Configuration about:buildconfig
    Memory Use about:memory
    Mail and News Accounts
    ID Incoming server Outgoing servers
    Name Connection security Authentication method Name Connection security Authentication method Default?
    account2 (none) Local Folders plain passwordCleartext
    account3 (nntp) news.mozilla.org:119 plain passwordCleartext stbeehive.oracle.com:465 SSL passwordCleartext true
    account5 (imap) stbeehive.oracle.com:993 SSL passwordCleartext stbeehive.oracle.com:465 SSL passwordCleartext true
    Crash Reports
    Report ID Submitted
    bp-0a8986d2-ff0c-41c3-9da6-e770e2141224 24/12/2014
    bp-01f44ba7-3143-4452-ac98-981b62140123 23/01/2014
    Extensions
    Name Version Enabled ID
    British English Dictionary 1.19.1 false [email protected]
    Lightning 3.3.3 false {e2fda1a4-762b-4020-b5ad-a41df1933103}
    Oracle Beehive Extensions for Thunderbird (OracleInternal) 1.0.0.5 false [email protected]
    Important Modified Preferences
    Name Value
    accessibility.typeaheadfind.flashBar 0
    browser.cache.disk.capacity 358400
    browser.cache.disk.smart_size_cached_value 358400
    browser.cache.disk.smart_size.first_run false
    browser.cache.disk.smart_size.use_old_max false
    extensions.lastAppVersion 31.6.0
    font.internaluseonly.changed false
    font.name.monospace.el Consolas
    font.name.monospace.tr Consolas
    font.name.monospace.x-baltic Consolas
    font.name.monospace.x-central-euro Consolas
    font.name.monospace.x-cyrillic Consolas
    font.name.monospace.x-unicode Consolas
    font.name.monospace.x-western Consolas
    font.name.sans-serif.el Calibri
    font.name.sans-serif.tr Calibri
    font.name.sans-serif.x-baltic Calibri
    font.name.sans-serif.x-central-euro Calibri
    font.name.sans-serif.x-cyrillic Calibri
    font.name.sans-serif.x-unicode Calibri
    font.name.sans-serif.x-western Calibri
    font.name.serif.el Cambria
    font.name.serif.tr Cambria
    font.name.serif.x-baltic Cambria
    font.name.serif.x-central-euro Cambria
    font.name.serif.x-cyrillic Cambria
    font.name.serif.x-unicode Cambria
    font.name.serif.x-western Cambria
    font.size.fixed.el 14
    font.size.fixed.tr 14
    font.size.fixed.x-baltic 14
    font.size.fixed.x-central-euro 14
    font.size.fixed.x-cyrillic 14
    font.size.fixed.x-unicode 14
    font.size.fixed.x-western 14
    font.size.variable.el 17
    font.size.variable.tr 17
    font.size.variable.x-baltic 17
    font.size.variable.x-central-euro 17
    font.size.variable.x-cyrillic 17
    font.size.variable.x-unicode 17
    font.size.variable.x-western 17
    gfx.blacklist.suggested-driver-version 257.21
    mail.openMessageBehavior.version 1
    mail.winsearch.firstRunDone true
    mailnews.database.global.datastore.id 8d997817-eec1-4f16-aa36-008d5baeb30
    mailnews.database.global.indexer.enabled false
    network.cookie.prefsMigrated true
    network.tcp.sendbuffer 65536
    places.database.lastMaintenance 1429004341
    places.history.expiration.transient_current_max_pages 78789
    plugin.disable_full_page_plugin_for_types application/pdf
    plugin.importedState true
    plugin.state.flash 0
    plugin.state.java 0
    plugin.state.np32dsw 0
    plugin.state.npatgpc 0
    plugin.state.npctrl 0
    plugin.state.npdeployjava 0
    plugin.state.npfoxitreaderplugin 0
    plugin.state.npgeplugin 0
    plugin.state.npgoogleupdate 0
    plugin.state.npitunes 0
    plugin.state.npoff 0
    plugin.state.npqtplugin 0
    plugin.state.nprlsecurepluginlayer 0
    plugin.state.npunity3d 0
    plugin.state.npwatweb 0
    plugin.state.npwlpg 0
    plugins.update.notifyUser true
    Graphics
    Adapter Description NVIDIA Quadro FX 580
    Vendor ID 0x10de
    Device ID 0x0659
    Adapter RAM 512
    Adapter Drivers nvd3dum nvwgf2um,nvwgf2um
    Driver Version 8.15.11.9038
    Driver Date 7-14-2009
    Direct2D Enabled Blocked for your graphics driver version. Try updating your graphics driver to version 257.21 or newer.
    DirectWrite Enabled false (6.2.9200.16571)
    ClearType Parameters Gamma: 2200 Pixel Structure: R
    WebGL Renderer Blocked for your graphics driver version. Try updating your graphics driver to version 257.21 or newer.
    GPU Accelerated Windows 0. Blocked for your graphics driver version. Try updating your graphics driver to version 257.21 or newer.
    AzureCanvasBackend skia
    AzureSkiaAccelerated 0
    AzureFallbackCanvasBackend cairo
    AzureContentBackend cairo
    JavaScript
    Incremental GC 1
    Accessibility
    Activated 0
    Prevent Accessibility 0
    Library Versions
    Expected minimum version Version in use
    NSPR 4.10.6 4.10.6
    NSS 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC
    NSS Util 3.16.2.3 3.16.2.3
    NSS SSL 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC
    NSS S/MIME 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC

    Noticed this in the info supplied:
    Graphics Adapter Description NVIDIA Quadro FX 580
    Vendor ID 0x10de
    Device ID 0x0659
    Adapter RAM 512
    Adapter Drivers nvd3dum nvwgf2um,nvwgf2um
    Driver Version 8.15.11.9038
    Driver Date 7-14-2009
    Direct2D Enabled Blocked for your graphics driver version.
    Try updating your graphics driver to version 257.21 or newer.
    Could you update your graphics driver and retest.

  • BODS : Datastore options for SAP R/3 - need clarity for use

    Hi All.
    Another request, this time to understand the datastore options on an existing BOXI 3.1 BODS installation.
    We are trying to pull a new table from SAP R/3 into BODS, and when we tested a simple workflow we found that the ABAP program is not getting generated as expected and terminates.
    While creating a new datastore for SAP R/3 Source :
    When we look at the datastore we find the following options for R3 Source ;
    ABAP Execution option :
    Execute preloaded
    Generate and execute
    Under data transfer method we find ;
    Shared directory
    Direct download
    FTP
    Custom transfer
    Then we have working directory on sap server
    local directory
    generated ABAP Directory
    I am testing a simple workflow that pulls data from SAP R/3 on a dev machine,
    but I do not understand
    which combination of ABAP execution option, data transfer method, and path would work together.
    When I choose direct download together with generate and execute, it throws an error.
    Can anyone help me with the combinations of options to choose for an R/3 source, and the implications thereof?
    I had created a folder, D:\Bodi, on the local BODS server and given that path for the data transfer.
    But the files are not getting generated, for whatever reason.
    Any advice on this would be helpful.
    I also found it a bit unusual that there is no button to test whether the connection is correct; a Test Connection button is not there, which I felt could be included.
    Note: on the existing production system we have chosen execute preloaded with a shared directory on the SAP server, and shared that folder path for the user with full rights. But when we try to do the same on the dev machine, as a test before transporting to production, a simple workflow does not work.
    I would like to know which settings on the SAP server really affect the datastore options in BODS.
    thanks
    indu

    Indeed, BODS <> SAP connectivity can be tricky.
    For a development environment, I suggest you select the option "Generate and execute" for your "ABAP Execution Option." What this means is that DS will create, on the fly, small-ish ABAP programs. These ABAP programs will be written, in plain text, to a local directory on the DS job server, in the folder specified in "Generated ABAP Directory".  You can see them in there after an attempted job execution, assuming the job involves the creation of an ABAP program.  The ABAP program name is specified in the properties of an ABAP data flow, under Options > ABAP Program Name.  If you can't, perhaps the DS job server process doesn't have full rights to that folder - ?  After being generated on-the-fly, they'll be transferred to SAP to execute.  The SAP user you use to connect to SAP must have sufficient rights to upload-and-execute these ABAP programs, and that's a fairly substantial set of rights. What's required is documented in the BODS supplement for SAP. Often, to get things running, your friendly local Basis admin will grant SAP_ALL to the DS user, to see what rights are being invoked.
    Once all that jazz is working, you need to get the data back. There are a number of ways to do this. The method of data transfer is specified in "Data transfer method," where, ignoring "Custom transfer," you have three choices:
    1) Direct Download: easiest and slowest.  This method tells SAP to attempt to stick data in the client-side folder specified in "Local directory." Try this first.
    2) Shared folder: This is recommended when you have SAP being hosted on a Windows box. Basically: you set "Working directory on SAP Server" and "Data Services path to the shared directory" to point to the same folder.  SAP uses the "Working directory on SAP Server" to find this folder, and DS uses the other setting. So, for instance, if you were going to use the Shared folder method, you could set "Working directory on SAP Server" to "E:\BODS_Transfer", and, assuming E:\BODS_Transfer was shared-out as "BODS_Transfer", you could set "Data Services path to the shared directory" to
    \\dev12.somecompany.com\BODS_Transfer.  Then, you'd need to set up all the relevant security, as both SAP and DS need rights to read and write files in this folder.
    3) FTP (this is the method I usually use): SAP writes the "transport files" you're after (i.e., the data) in the folder specified in "Working directory on SAP Server". Then, you need to establish ftp connectivity to that folder from the DS job server's perspective, which you do by entering the ftp host name and the path to that folder in "FTP host name" and "FTP relative path to the SAP working directory".  In my opinion, the "relative" business is a little confusing, and I just typically enter the full ftp path, beginning the path w/ a forward slash, like "/usr/sap/tmp/BOBJ" or something like that.  You also need to obtain a separate username and password for the ftp connectivity. Note that this name and password has NOTHING to do with the SAP username and password; you're just setting-up DS to act as an ftp client. I strongly encourage you to test ftp connectivity by using a regular ftp client from the DS job server and attempt to connect to your ftp host using the username and password you were given, and attempting to fetch some sample test file. If you can't do this, manually, then DS won't be able to do it, either.
    Best wishes,
    Jeff Prenevost

  • How can I get only the required data from a table by using CKM

    Hi...
    I have done one scenario, i.e. getting the required data from one table into another table on the basis of some condition, by using the CKM.
    Now I want the same, but this time my target is a file, not an RDBMS.
    So please tell me the procedure for getting the required data from a source table into a file, on the basis of some condition, using the CKM.
    Thanks

    The CKM checks constraints and not-null conditions. You can use IKM SQL to File, but there is no CKM option in that IKM.
    The other method is to declare those not-null and other conditions as filters on the source datastore; that should filter out the records that you do not want to move to the file.

  • Cannot send or receive email using Thunderbird, but can get it just fine on Verizon server.

    Every time Mozilla/Thunderbird does a major upgrade I have a problem with sending and receiving my email. When trying to send an email, I get the message:
    "An error occurred while sending mail. The mail server responded: 5.7.1 Missing or literal domains not allowed. Please verify that your email address is correct in your Mail preferences and try again."
    Using my regular email password, I am able to get my email from the Verizon server just fine. It simply will not work with Thunderbird. Every time there is a major upgrade I spend hours and hours with Verizon techs trying to figure out what is wrong. Yesterday was one such day; we looked at EVERYTHING and no reason could be found for the problem. The tech concluded that the problem had to be with Thunderbird and that I should reinstall it!!!! I don't have time to reinstall it! Somehow it has been fixed before, but we don't even know how.
    Please HELP!!

    Application Basics
    Name: Thunderbird
    Version: 31.5.0
    User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0
    Profile Folder: Show Folder
    (Local drive)
    Application Build ID: 20150222233048
    Enabled Plugins: about:plugins
    Build Configuration: about:buildconfig
    Memory Use: about:memory
    Mail and News Accounts
    account1:
    INCOMING: account1, , (pop3) pop.verizon.net:995, SSL, passwordCleartext
    OUTGOING: smtp.verizon.net:465, SSL, passwordCleartext, true
    account2:
    INCOMING: account2, , (none) Local Folders, plain, passwordCleartext
    Crash Reports
    http://crash-stats.mozilla.com/report/index/bp-5d08a162-d682-42e4-b002-98dfb2140903 (9/2/2014)
    http://crash-stats.mozilla.com/report/index/bp-06b23374-e88f-4d7c-9aab-9d4dd2131129 (11/29/2013)
    Extensions
    Important Modified Preferences
    Name: Value
    accessibility.typeaheadfind.flashBar: 0
    browser.cache.disk.capacity: 512000
    browser.cache.disk.smart_size_cached_value: 358400
    browser.cache.disk.smart_size.first_run: false
    browser.cache.disk.smart_size.use_old_max: false
    extensions.lastAppVersion: 31.5.0
    font.internaluseonly.changed: true
    font.name.monospace.el: Consolas
    font.name.monospace.tr: Consolas
    font.name.monospace.x-baltic: Consolas
    font.name.monospace.x-central-euro: Consolas
    font.name.monospace.x-cyrillic: Consolas
    font.name.monospace.x-unicode: Consolas
    font.name.monospace.x-western: Consolas
    font.name.sans-serif.el: Calibri
    font.name.sans-serif.tr: Calibri
    font.name.sans-serif.x-baltic: Calibri
    font.name.sans-serif.x-central-euro: Calibri
    font.name.sans-serif.x-cyrillic: Calibri
    font.name.sans-serif.x-unicode: Calibri
    font.name.serif.el: Cambria
    font.name.serif.tr: Cambria
    font.name.serif.x-baltic: Cambria
    font.name.serif.x-central-euro: Cambria
    font.name.serif.x-cyrillic: Cambria
    font.name.serif.x-unicode: Cambria
    font.name.serif.x-western: Cambria
    font.size.fixed.el: 14
    font.size.fixed.tr: 14
    font.size.fixed.x-baltic: 14
    font.size.fixed.x-central-euro: 14
    font.size.fixed.x-cyrillic: 14
    font.size.fixed.x-unicode: 14
    font.size.fixed.x-western: 14
    font.size.variable.el: 17
    font.size.variable.tr: 17
    font.size.variable.x-baltic: 17
    font.size.variable.x-central-euro: 17
    font.size.variable.x-cyrillic: 17
    font.size.variable.x-unicode: 17
    font.size.variable.x-western: 14
    gfx.direct3d.last_used_feature_level_idx: 0
    mail.openMessageBehavior.version: 1
    mail.winsearch.enable: true
    mail.winsearch.firstRunDone: true
    mail.winsearch.global_reindex_time: 1366427346
    mailnews.database.global.datastore.id: 3d7edf91-47a7-443c-aca2-98ba6dddeb0
    mailnews.database.global.views.conversation.columns: {"threadCol":{"visible":true,"ordinal":"9"},"flaggedCol":{"visible":true,"ordinal":"1"},"attachmentCol":{"visible":false…
    network.cookie.cookieBehavior: 1
    network.cookie.lifetimePolicy: 2
    network.cookie.prefsMigrated: true
    places.database.lastMaintenance: 1424643534
    places.history.expiration.transient_current_max_pages: 104858
    plugin.importedState: true
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_bgcolor: false
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_bgimages: false
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_colorspace:
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_command:
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_downloadfonts: false
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_duplex: -882281128
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_edge_bottom: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_edge_left: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_edge_right: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_edge_top: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_evenpages: true
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_footercenter:
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_footerleft: &PT
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_footerright: &D
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_headercenter:
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_headerleft: &T
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_headerright: &U
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_in_color: true
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_margin_bottom: 0.5
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_margin_left: 0.5
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_margin_right: 0.5
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_margin_top: 0.5
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_oddpages: true
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_orientation: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_page_delay: 50
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_paper_data: 1
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_paper_height: 11.00
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_paper_name:
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_paper_size_type: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_paper_size_unit: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_paper_width: 8.50
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_plex_name:
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_resolution: 91081952
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_resolution_name:
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_reversed: false
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_scaling: 0.90
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_shrink_to_fit: false
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_to_file: false
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_unwriteable_margin_bottom: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_unwriteable_margin_left: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_unwriteable_margin_right: 0
    print.printer_HP_Officejet_Pro_8500_A909g_Series.print_unwriteable_margin_top: 0
    Graphics
    Adapter Description: ATI Radeon HD 4200
    Vendor ID: 0x1002
    Device ID: 0x9710
    Adapter RAM: 256
    Adapter Drivers: aticfx64 aticfx64 aticfx32 aticfx32 atiumd64 atidxx64 atiumdag atidxx32 atiumdva atiumd6a atitmm64
    Driver Version: 8.862.3.0
    Driver Date: 6-29-2011
    Direct2D Enabled: true
    DirectWrite Enabled: true (6.2.9200.16571)
    ClearType Parameters: ClearType parameters not found
    WebGL Renderer: false
    GPU Accelerated Windows: 2/2 Direct3D 10
    AzureCanvasBackend: direct2d
    AzureSkiaAccelerated: 0
    AzureFallbackCanvasBackend: cairo
    AzureContentBackend: direct2d
    JavaScript
    Incremental GC: 1
    Accessibility
    Activated: 0
    Prevent Accessibility: 0
    Library Versions
    Expected minimum version Version in use
    NSPR 4.10.6 4.10.6
    NSS 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC
    NSS Util 3.16.2.3 3.16.2.3
    NSS SSL 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC
    NSS S/MIME 3.16.2.3 Basic ECC 3.16.2.3 Basic ECC

  • Portal events are not getting loaded into the Analytics database tables

    Analytics database ASFACT tables (ASFACT_PAGEVIEWS,ASFACT_PORLETVIEW) are not getting populated with data.
    Possible diagnosis/workarounds tried:
    -Checked the analytics configuration in configuration manager, Enable Analytics Communication option checked
    -Registered Portal Events during analytics installation
    -Verified that UDP events are sent out from the portal: Test: OK
    -Reinstalled Interaction analytics component
    Any inputs highly appreciated.
    Cheers,
    Sandeep
    In collector.log, found the exception:
    08 Jul 2010 07:12:54,613 ERROR PageViewHandler - could not retrieve user: com.plumtree.analytics.collector.exception.DimensionManagerException: Could not insert dimension in the database
    com.plumtree.analytics.collector.exception.DimensionManagerException: Could not insert dimension in the database
    at com.plumtree.analytics.collector.cache.DimensionManager.insertDB(DimensionManager.java:271)
    at com.plumtree.analytics.collector.cache.DimensionManager.manageDBImage(DimensionManager.java:139)
    at com.plumtree.analytics.collector.cache.DimensionManager.handleNewDimension(DimensionManager.java:85)
    at com.plumtree.analytics.collector.eventhandler.BaseEventHandler.insertDimension(BaseEventHandler.java:63)
    at com.plumtree.analytics.collector.eventhandler.BaseEventHandler.getUser(BaseEventHandler.java:198)
    at com.plumtree.analytics.collector.eventhandler.PageViewHandler.handle(PageViewHandler.java:71)
    at com.plumtree.analytics.collector.DataResolver.handleEvent(DataResolver.java:165)
    at com.plumtree.analytics.collector.DataResolver.run(DataResolver.java:126)
    Caused by: org.hibernate.MappingException: Unknown entity: com.plumtree.analytics.core.persist.BaseCustomEventDimension$$BeanGeneratorByCGLIB$$6a0493c4
    at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:569)
    at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1086)
    at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:83)
    at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:184)
    at org.hibernate.event.def.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:33)
    at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:173)
    at org.hibernate.event.def.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:27)
    at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:69)
    at org.hibernate.impl.SessionImpl.save(SessionImpl.java:481)
    at org.hibernate.impl.SessionImpl.save(SessionImpl.java:476)
    at com.plumtree.analytics.collector.cache.DimensionManager.insertDB(DimensionManager.java:266)
    ... 7 more
    In analyticsui.log, found the exception below:
    08 Jul 2010 06:50:25,910 ERROR Configuration - Could not compile the mapping document
    org.hibernate.MappingException: duplicate import: com.plumtree.analytics.core.persist.BaseCustomEventFact$$BeanGeneratorByCGLIB$$6a896b0d
    at org.hibernate.cfg.Mappings.addImport(Mappings.java:105)
    at org.hibernate.cfg.HbmBinder.bindPersistentClassCommonValues(HbmBinder.java:541)
    at org.hibernate.cfg.HbmBinder.bindClass(HbmBinder.java:488)
    at org.hibernate.cfg.HbmBinder.bindRootClass(HbmBinder.java:234)
    at org.hibernate.cfg.HbmBinder.bindRoot(HbmBinder.java:152)
    at org.hibernate.cfg.Configuration.add(Configuration.java:362)
    at org.hibernate.cfg.Configuration.addXML(Configuration.java:317)
    at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:796)
    at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:652)
    at com.plumtree.analytics.core.HibernateUtil.refreshCustomEvents(HibernateUtil.java:496)
    at com.plumtree.analytics.ui.common.AnalyticsInitServlet.init(AnalyticsInitServlet.java:104)
    at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1161)
    at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:981)
    at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4045)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4351)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:920)
    at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:883)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:492)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
    at org.apache.catalina.core.StandardService.start(StandardService.java:516)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:566)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.plumtree.container.Bootstrap.start(Bootstrap.java:531)
    at com.plumtree.container.Bootstrap.main(Bootstrap.java:254)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.tanukisoftware.wrapper.WrapperStartStopApp.run(WrapperStartStopApp.java:238)
    at java.lang.Thread.run(Thread.java:595)
    08 Jul 2010 06:50:25,915 ERROR Configuration - Could not configure datastore from XML
    org.hibernate.MappingException: duplicate import: com.plumtree.analytics.core.persist.BaseCustomEventFact$$BeanGeneratorByCGLIB$$6a896b0d
    at org.hibernate.cfg.Mappings.addImport(Mappings.java:105)
    at org.hibernate.cfg.HbmBinder.bindPersistentClassCommonValues(HbmBinder.java:541)
    at org.hibernate.cfg.HbmBinder.bindClass(HbmBinder.java:488)
    at org.hibernate.cfg.HbmBinder.bindRootClass(HbmBinder.java:234)
    at org.hibernate.cfg.HbmBinder.bindRoot(HbmBinder.java:152)
    at org.hibernate.cfg.Configuration.add(Configuration.java:362)
    at org.hibernate.cfg.Configuration.addXML(Configuration.java:317)
    at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:796)
    at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:652)
    at com.plumtree.analytics.core.HibernateUtil.refreshCustomEvents(HibernateUtil.java:496)
    at com.plumtree.analytics.ui.common.AnalyticsInitServlet.init(AnalyticsInitServlet.java:104)
    at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1161)
    at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:981)
    at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4045)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4351)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:920)
    at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:883)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:492)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
    at org.apache.catalina.core.StandardService.start(StandardService.java:516)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:566)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.plumtree.container.Bootstrap.start(Bootstrap.java:531)
    at com.plumtree.container.Bootstrap.main(Bootstrap.java:254)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.tanukisoftware.wrapper.WrapperStartStopApp.run(WrapperStartStopApp.java:238)
    at java.lang.Thread.run(Thread.java:595)
    wrapper_collector.log
    INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.analytics.collector.eventhandler.PortletViewHandler.handle(PortletViewHandler.java:46)
    INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.analytics.collector.DataResolver.handleEvent(DataResolver.java:165)
    INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.analytics.collector.DataResolver.run(DataResolver.java:126)
    INFO | jvm 1 | 2009/11/10 17:25:22 | Caused by: java.sql.SQLException: [plumtree][Oracle JDBC Driver][Oracle]ORA-00001: unique constraint (ANALYTICSDBUSER.IX_USERBYUSERID) violated
    INFO | jvm 1 | 2009/11/10 17:25:22 |
    INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.jdbc.base.BaseExceptions.createException(Unknown Source)

    Key words from the error messages suggest that a reinstallation of Analytics is needed to resolve this. The Analytics database is failing to get updated with the correct event mapping, which is why no data is being inserted:
    "Could not insert dimension in the database",
    "ERROR Configuration - Could not configure datastore from XML
    org.hibernate.MappingException: duplicate import: com.plumtree.analytics.core.persist.BaseCustomEventFact$$BeanGeneratorByCGLIB$$6a896b0d"
    "ORA-00001: unique constraint (ANALYTICSDBUSER.IX_USERBYUSERID) violated",
    "ERROR Configuration - Could not compile the mapping document

  • Transformation Rule Type "Read from DataStore"

    Hi All,
    I have two DSOs (Header and Item). In the Item DSO I have a field Bill-to Party, and my Header DSO also has Bill-to Party.
    I need to fill the Bill-to Party field in the Header DSO from the Item DSO's Bill-to Party field by using the rule type Read from DataStore.
    The Item DSO has two key fields, and the two DSOs (Header and Item) share only one common key field, Document Number. I am assigning the document number in the transformation, but I fail to fill Bill-to (error: Cannot read from DataStore). Please guide me on how to achieve this.

    Hi.
    I think the problem is that the transformation rule needs the full key fields of the DSO being read (at item level) to be mapped in order to get the result value. Otherwise, if more than one record is found, more than one result value is found as well.
    It would work if you were reading the Header DSO, as each item would then get just one record as the result.
    This can be solved with ABAP programming in start/end routines.
    Hope this helps.
    regards.

  • How to get All Users from OID LDAP

    Hi all,
    I have Oracle Internet Directory (OID) and have created the users in it manually.
    Now I want to extract all the users from OID. How can I get the users from OID?
    Any response will be appreciated. If someone could show me demo code for that, I would be grateful.
    Thanks and regards
    Pravy

    hi,
    the notes from metalink:
    regards
    elvis
    Doc ID: Note:276688.1
    Subject: How to copy (export/import) the Portal database schemas of IAS 9.0.4 to another database
    Type: BULLETIN
    Status: PUBLISHED
    Content Type: TEXT/X-HTML
    Creation Date: 18-JUN-2004
    Last Revision Date: 05-AUG-2005
    How to copy (export/import) Portal database schemas of IAS 9.0.4 to another database
    Note 276688.1
    Download scripts Unix: Attachment 276688.1:1
    Download Perl scripts (Unix/NT) :Attachment 276688.1:2
    This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article.
    HISTORY
    Version 1.0 : 24-JUN-2004: creation
    Version 1.1 : 25-JUN-2004: added a link to download the scripts from Metalink
    Version 1.2 : 29-JUN-2004: Import script: Intermedia indexes are recreated. Imported jobs are reassigned to Portal. ptlconfig replaces ptlasst.
    Version 1.3 : 09-JUL-2004: Additional updates. Usage of iasconfig.xml. Need only 3 environment variables to import.
    Version 1.4 : 18-AUG-2004: Remark about 9.2.0.5 and 10.1.0.2 database
    Version 1.5 : 26-AUG-2004: Duplicate job id
    Version 1.6 : 29-NOV-2004: Remark about WWC-44131 and WWSBR_DOC_CTX_54
    Version 1.7 : 07-JAN-2005: Attached perl scripts (for NT/Unix) at the end of the note
    Version 1.8 : 12-MAY-2005: added a work-around for the WWSTO_SESS_FK1 issue
    Version 1.9 : 07-JUL-2005: logoff trigger and 9.0.1 database export, import in 10g database
    Version 1.10: 05-AUG-2005: reference to the 10.1.2 note
    PURPOSE
    This document explains how to copy a Portal database schema from a database to another database.
    It allows restoring the Portal repository and the OID security associated with Portal.
    It can be used to go into production by physically copying a database from a development portal to a production environment, avoiding the use of Portal's export/import utilities.
    This note:
    uses the export/import on the database level
    allows the export/import to be done between different platforms
    The scripts are Unix-based, written for the bash shell. They can be adapted for other platforms.
    For those familiar with this technique in Portal 9.0.2, there is a list of the main differences from Portal 9.0.2 at the end of the note.
    These scripts are based on the experience of many people with Portal 9.0.2.
    The scripts are attached to the note. Download them here: Attachment 276688.1:1 : exp_schema_904.zip
    A new version of the scripts was written in Perl. You can also download them here: Attachment 276688.1:2 : exp_schema_904_v2.zip. They do exactly the same as the bash ones, but have the advantage of working on all platforms.
    SCOPE & APPLICATION
    This document is intended for Portal administrators. To use this note, you need basic DBA skills.
    This note is for Portal 9.0.4.x only. The notes for Portal 9.0.2 are:
    Note 228516.1 : How to copy (export/import) Portal database schemas of IAS 9.0.2 to another database
    Note 217187.1 : How to restore a cold backup of a Portal IAS 9.0.2 on another machine
    The note for Portal 10.1.2 is:
    Note 330391.1 : How to copy (export/import) Portal database schemas of IAS 10.1.2 to another database
    Method
    The method that we will follow in the document is the following one:
    Export:
    - export of the 4 portal schemas of a database (DEV / development)
    - export the LDAP OID users and groups (optional)
    Install a new machine with fresh IAS installation (PROD / production)
    Import:
    - delete the new and empty portal schema on PROD
    - import the schemas in the production database in place of the deleted schemas
    - import the LDAP OID users and groups (optional)
    - modify the configuration such that the infrastructure uses the portal repository of the backup
    - modify the configuration such that the portal repository uses the OID, webcache and SSO of the new infrastructure
    The export and the import are divided into several steps. All of these steps are included in 2 sample scripts:
    export : exp_portal_schema.sh
    import : imp_portal_schema.sh
    In the 2 scripts, all the steps are run in one shot. It is just an example. Depending on the configuration and circumstances, all the steps can be run independently.
    Convention
    Development (DEV) is the name of the machine where the database to be copied resides
    Production (PROD) is the name of the machine to which the database is copied
    Prerequisite
    Some prerequisites first.
    A. Environment variables
    To run the import/export, you will need 3 environment variables. In the given scripts, they are defined in 'portal_env.sh'
    SYS_PASSWORD - the password of user sys in the Portal database
    IAS_PASSWORD - the password of IAS
    ORACLE_HOME - the ORACLE_HOME of the midtier
    The rest of the settings are found automatically by reading the iasconfig.xml file and querying the OID. This is done in 'portal_automatic_env.sh'. I wish to write a note on iasconfig.xml and the way to transform it into useful environment variables, but it is not done yet. In the meantime, you can read the old 9.0.2 doc, which explains the meaning of most variables:
    < Note 223438.1 : Shell script to find your portal passwords, settings and place them in environment variables on Unix >
    B. Definition: Cutter database
    A 'Cutter Database' is the term used to designate a database created by RepCA or OUI that contains all the schemas used by an IAS 9.0.4 infrastructure, even if in most cases several schemas are not used.
    In Portal 9.0.4, the option to install only the portal repository in an empty database has been removed. It has been replaced by RepCA, a tool that creates an infrastructure database. Among all the infrastructure database schemas are the portal schemas.
    This does not stop people from using 2 databases for running Portal, one for OID and one for Portal. But in comparison with Portal 9.0.2, all schemas exist in both databases even if some are not used.
    The main idea of the Cutter database is to have only one database type and, in the future, to simplify the upgrades of customer installations.
    For an installation where Portal and OID/SSO are in 2 separate databases, it looks like this:
    Infrastructure database (INFRA_SID)
    - Portal 9.0.2: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    - Portal 9.0.4: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    Portal database (PORTAL_SID)
    - Portal 9.0.2: the custom Portal database contains Portal (used)
    - Portal 9.0.4: the custom Portal database (which is also an infrastructure) contains OID (not used), OEM (not used), Single Sign-on / orasso (not used), Portal (used)
In any case, this note supposes there is only one single database, but it also works for a 2-database installation like the one explained above.
    C. Directory structure.
The sample scripts given in this note are explained in the next paragraphs. First, note that the scripts use a directory structure that helps to classify the files.
    Here is a list of important files used during the process of export/import:
File Name - Description
exp_portal_schema.sh - Sample script that exports all the data needed from a development machine
imp_portal_schema.sh - Sample script that imports all the data into a production machine
portal_env.sh - Script that defines the env variables specific to your system (to configure)
portal_automatic_env.sh - Helper script to get all the rest of the Portal settings automatically
xsl - Directory containing all the XSL files (helper scripts)
xsl/del_authpassword.xsl - Helper script to remove the authpassword tags in the DSML files
xsl/portal_env_unix.xsl - Helper script to get Portal settings from the iasconfig.xml file
exp_data - Directory containing all the exported data
exp_data/portal_exp.dmp - Export at the database level of the portal, portal_app, ... database schemas
exp_data/iasconfig.xml - Copy of the iasconfig.xml of the DEV midtier, used to get the hostname and port of Webcache
exp_data/portal_users.xml - Export from LDAP of the OID users used by Portal (optional)
exp_data/portal_groups.xml - Export from LDAP of the OID groups used by Portal (optional)
imp_log - Directory containing several spool and log files generated during the import
imp_log/import.log - Log file generated when running the imp command
imp_log/ptlconfig.log - Log generated by ptlconfig when rewiring portal to the infrastructure
imp_log/(others) - Some other spool files
    D. Known limitations
    The scripts given in this note have the following known limitations:
It does not copy the data stored in the SSO schema: external application definitions and the passwords stored for them.
See the additional steps section, 'SSO import', to know how to handle this.
The reason is that the ssomig command resides in the Infrastructure Oracle home, while all the Portal commands reside in the Midtier home, and in practice these 2 Oracle homes are most of the time not on the same machine.
    The export of the users in OID exports from the default user location:
    ldapsearch .... -b "cn=users,dc=domain,dc=com"
This is not 100% correct: the users are by default stored in something like "cn=users,dc=domain,dc=com", so if the users are stored in the default location, it works. But if this location (the user install base) has been customized, it does not work.
The reason is that such a customization usually means the LDAP directory is highly customized, and I prefer that the administrator copies the real LDAP contents himself. The right command will probably depend on the customer case, so I preferred not to take the risk.
    orclCommonNicknameAttribute must match in the Target and Source OID .
    The orclCommonNicknameAttribute must match on both the source and target OID. By default this attribute is set to "uid", so if this has been changed, it must be changed in both systems.
    Reference Note 282698.1
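A simple way to verify this is to read the attribute on both OID servers, for example with the ldapsearch sketch below. The DN shown is the usual default location of the realm-wide attributes; it can differ on a customized realm, so treat it as an assumption to check first:
# sketch: display the nickname attribute configured in the OID realm
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD \
  -b "cn=Common,cn=Products,cn=OracleContext,$OID_DOMAIN_DN" \
  -s base "objectclass=*" orclcommonnicknameattribute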
    Migration of custom Java portlets.
The script migrates all the data of Portal stored in the database. If you have custom Java portlets deployed on your development machine, you will need to copy them to the production system.
    Step 1 - Export in Development (DEV)
    To export a full Portal installation to another machine, you need to follow 3 steps:
    Export at the database level the portal schemas + related schemas
    Get the midtier hostname and port of DEV
    Export of the users and groups with LDAPSEARCH in 2 XML files
    A script combining all the steps is available here.
    A. Export the 4 portals schemas (DEV)
    You need to export 3 types of database schemas:
    The 4 portal schemas created by default by the portal installation :
    portal,
    portal_app,
    portal_demo,
    portal_public
The schemas where your custom database portlets / providers reside (if any)
- The custom schemas you have created for storing your portlet / provider code
The schemas where your custom tables reside (if any)
- Your custom schemas accessed by portal and containing only data (tables, views ...)
You can get an approximate list of the schemas, i.e. the default portal schemas (1) and the database portlet schemas (2), with this query.
    SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
    FROM DBA_USERS
    WHERE USERNAME IN (user, user||'_PUBLIC', user||'_DEMO', user||'_APP')
    OR USERNAME IN (SELECT DISTINCT OWNER FROM WWAPP_APPLICATION$ WHERE NAME != 'WWV_SYSTEM');
It still misses your custom schemas containing only data (3).
We will export the 4 schemas and your custom ones in an export file with the user sys.
Please use a command like this one:
exp userid="'sys/change_on_install@dev as sysdba'" file=portal_exp.dmp grants=y log=portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
The result is a dump file: 'portal_exp.dmp'. If you are using a 9.2.0.5 or 10.1.0.2 database, the format of the exp/imp dump file has changed; please read problem 6 ('EXP-003 with a database 9.2.0.5 or 10.1.0.2') below.
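For example, if you also have a custom provider schema and a custom data schema, the owner list simply grows. The names 'myportlets' and 'mydata' below are hypothetical, replace them with your own schemas:
# sketch: same export, with two custom schemas added to the owner list
exp userid="'sys/change_on_install@dev as sysdba'" file=portal_exp.dmp grants=y log=portal_exp.log owner=(portal,portal_app,portal_demo,portal_public,myportlets,mydata)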
    B. Hostname and port
For the URL used to access the portal, you need the 2 following values to run the script 'imp_portal_schema.sh' below:
    Webcache hostname
    Webcache listen port
    These values are contained in the iasconfig.xml file of the midtier.
    iasconfig.xml
    <IASConfig XSDVersion="1.0">
    <IASInstance Name="ias904.dev.dev_domain.com" Host="dev.dev_domain.com" Version="9.0.4">
    <OIDComponent AdminPassword="@BfgIaXrX1jYsifcgEhwxciglM+pXod0dNw==" AdminDN="cn=orcladmin" SSLEnabled="false" LDAPPort="3060"/>
    <WebCacheComponent AdminPort="4037" ListenPort="7782" InvalidationPort="4038" InvalidationUsername="invalidator" InvalidationPassword="@BR9LXXoXbvW1iH/IEFb2rqBrxSu11LuSdg==" SSLEnabled="false"/>
    <EMComponent ConsoleHTTPPort="1813" SSLEnabled="false"/>
    </IASInstance>
    <PortalInstance DADLocation="/pls/portal" SchemaUsername="portal" SchemaPassword="@BR9LXXoXbvW1c5ZkK8t3KJJivRb0Uus9og==" ConnectString="cn=asdb,cn=oraclecontext">
    <WebCacheDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <OIDDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <EMDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    </PortalInstance>
    </IASConfig>
    It corresponds to a portal URL like this:
http://dev.dev_domain.com:7782/pls/portal
The script exp_portal_schema.sh copies the iasconfig.xml file into the exp_data directory.
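If you just want to check these two values by hand, without using the helper XSL, a quick sed over the copied file can extract them. This is only a rough sketch and assumes the attribute layout shown above, with a single IASInstance:
# sketch: read the Webcache hostname and listen port from iasconfig.xml
WEBCACHE_HOSTNAME=`sed -n 's/.*<IASInstance .*Host="\([^"]*\)".*/\1/p' exp_data/iasconfig.xml | head -1`
WEBCACHE_LISTEN_PORT=`sed -n 's/.*<WebCacheComponent .*ListenPort="\([^"]*\)".*/\1/p' exp_data/iasconfig.xml | head -1`
echo "Portal URL: http://$WEBCACHE_HOSTNAME:$WEBCACHE_LISTEN_PORT/pls/portal"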
    C. Export the security: users and groups (optional)
If you use other Single Sign-On users than the portal user, you probably need to restore the full security, i.e. the users and groups stored in OID, on the production machine. 5 steps need to be executed for this operation:
    Export the OID entries with LDAPSEARCH
Before the import, change the domain in the generated files (optional)
Before the import, remove the 'authpassword' attributes from the generated files
    Import them with LDAPADD
    Update the GUID/DN of the groups in portal tables
    Part 1 - LDAPSEARCH
    The typical commands to do this operation look like this:
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -b "cn=portal.040127.1384,cn=groups,dc=dev_domain,dc=com" -s sub "objectclass=*" > portal_group.xml
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -D "cn=orcladmin" -w $IAS_PASSWORD -b "cn=users,dc=dev_domain,dc=com" -s sub "objectclass=inetorgperson" > portal_users.xml
Take care about the following points:
The groups are stored in an LDAP subtree whose name contains the date of installation
(in this example: portal.040127.1384,cn=groups,dc=dev_domain,dc=com)
If the domain of dev and prod is different, the exported files contain the name of the development domain in the form 'dc=dev_domain,dc=com' in a lot of places. The domain name needs to be replaced by the production domain name everywhere in the files.
Ldapsearch uses the option '-X' to export to DSML files (XML). This avoids a problem related to the more common LDAP format, LDIF: LDIF files are wrapped at 78 characters, which makes it difficult to change the domain name contained in them. XML files are not wrapped and do not have this problem.
A sample script to export the 2 XML files is given in 'step 3 - export the users and groups (optional)' of the export script.
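As a quick sanity check on the exported files, you can count the exported entries. This assumes the usual DSML output of ldapsearch -X, where each object is written as an <entry ...> element:
# sketch: number of exported users and groups
grep -c "<entry " exp_data/portal_users.xml
grep -c "<entry " exp_data/portal_groups.xml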
    Part 2 : change the domain in the DSML files
If the domain of dev and prod is different, the exported files contain the name of the development domain in the form 'dc=dev_domain,dc=com' in a lot of places. The domain name needs to be replaced by the production domain name everywhere in the files.
    To do this, we can use these commands:
    cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
    cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
    Part 3 : Remove the authpassword attribute
The export of all attributes of all the users has also exported an automatically generated OID attribute called 'authpassword'.
'authpassword' is a list of automatically generated passwords for several types of application, and mostly it cannot be imported. Also, there is no option in ldapsearch (that I know of) that allows removing an attribute. Rather than giving the ldapsearch command the very long list of all the attributes except 'authpassword', we will remove the attribute after the export.
For that we will use the fact that the DSML files are XML files. There is an XSLT processor in Oracle IAS, the executable '$ORACLE_HOME/bin/xml'. XSLT is a standard W3C specification for transforming an XML file with the help of an XSL file.
    Here is the XSL file to remove the authpassword tag.
    del_autpassword.xsl
    <!--
    File : del_authpassword.xsl
    Version : 1.0
    Author : mgueury
    Description:
    Remove the authpassword from the DSML files
    -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml"/>
    <xsl:template match="*|@*|node()">
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:template>
    <xsl:template match="attr">
    <xsl:choose>
    <xsl:when test="@name='authpassword;oid'">
    </xsl:when>
    <xsl:when test="@name='authpassword;orclcommonpwd'">
    </xsl:when>
    <xsl:otherwise>
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:otherwise>
    </xsl:choose>
    </xsl:template>
    </xsl:stylesheet>
And the command to make the transformation:
xml -f -s del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
Where:
    imp_log/portal_users.xml is the final file without authpassword tags
    imp_log/temp_users.xml is the input file with the authpassword tags that can not be imported.
    Part 4 : LDAPADD
    The typical commands to do this operation look like this:
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_group.xml
ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_users.xml
Take care about the following points:
Ldapadd uses the option '-c'. Existing users/groups generate an error; the option -c allows continuing and ignoring these errors. In any case, the errors should be checked to see if they only concern existing entries.
A sample script to import the 2 XML files is given in 'step 4 - import the OID users and groups (optional)' of the import script.
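Because of the '-c' flag the command does not stop on duplicates, so it helps to capture the output in a file and filter out the 'already exists' noise afterwards. A possible sketch (the exact wording of the OID error messages may vary slightly between versions):
# sketch: keep the ldapadd output and show only the failures that are not duplicates
ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_users.xml -v > imp_log/ldapadd_users.log 2>&1
grep -i "ldap_add" imp_log/ldapadd_users.log | grep -vi "already exists"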
    Part 5 : Update the GUID/DN
In Portal 9.0.4, the update of the GUIDs is taken care of by PTLCONFIG during the import (import step 7).
    D. Example script for export
Here is an example script that combines the 3 steps.
Depending on your needs, you will:
- either execute all the steps
- or just execute the first one (the database export). It is enough if you just want to log in with the portal user on the production instance.
If your portal repository resides in a 9.2.0.5 or 10.1.0.2 database, please read problem 6 below.
You can download all the scripts here: Attachment 276688.1:1
Do not forget to modify the script to your needs, and above all to add the list of database users (schemas) as explained in point A above.
    exp_portal_schema.sh
    # BASH Script : exp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
    # This script export a portal dump file from a dev instance
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # In case you do not use portal_env.sh you have to define all the variables
    # For exporting the dump file only.
    # export SYS_PASSWORD=change_on_install
    # export PORTAL_TNS=asdb
    # For the security (optional)
    # export IAS_PASSWORD=welcome1
    # export PORTAL_USER=portal
    # export PORTAL_PASSWORD=A1b2c3de
    # export OID_HOSTNAME=development.domain.com
    # export OID_PORT=3060
    # export OID_DOMAIN_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # ------------------------------ Help function -----------------------------------
    function press_any_key() {
    if [ $PRESS_ANY_KEY_AFTER_EACH_STEP = "Y" ]; then
    echo
    echo Press enter to continue
    read $ANY_KEY
    else
    echo
fi
}
    echo "------------------------------- Export ------------------------------------"
    # create a directory for the export
    mkdir exp_data
    # copy the env variables in the log just in case
    export > exp_data/exp_env_variable.txt
    echo "--------------------- step 1 - export"
    # export the portal users, but take care to add:
    # - your users containing DB providers
    # - your users containing data (tables)
    exp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=exp_data/portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
    press_any_key
    echo "--------------------- step 2 - store iasconfig.xml file of the MIDTIER"
    cp $MIDTIER_ORACLE_HOME/portal/conf/iasconfig.xml exp_data
    press_any_key
    echo "--------------------- step 3 - export the users and groups (optional)"
    # Export the groups and users from OID in 2 XML files (not LDIF)
    # The OID groups of portal are stored in GROUP_INSTALL_BASE that depends
    # of the installation date.
    # For the user, I use the default place. If it does not work,
    # you can find the user place with:
    # > exec dbms_output.put_line(wwsec_oid.get_user_search_base);
    # Get the GROUP_INSTALL_BASE used in security export
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool exp_data/group_base.log
    begin
    dbms_output.put_line(wwsec_oid.get_group_install_base);
end;
/
    IASDB
    export GROUP_INSTALL_BASE=`grep cn= exp_data/group_base.log`
    echo '--- Exporting Groups'
    echo 'creating portal_groups.xml'
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -s sub -b "$GROUP_INSTALL_BASE" -s sub "objectclass=*" > exp_data/portal_groups.xml
    echo '--- Exporting Users'
    echo 'creating portal_users.xml'
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -X -s sub -b "cn=users,$OID_DOMAIN_DN" -s sub "objectclass=inetorgperson" > exp_data/portal_users.xml
The script is designed to be run from the midtier.
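A possible way to invoke it, assuming the scripts were unzipped in a working directory of your choice (the path below is only an example):
# example invocation on the DEV midtier
cd /home/oracle/portal_migration
chmod +x exp_portal_schema.sh
./exp_portal_schema.sh 2>&1 | tee exp_run.log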
    Step 2 - Install IAS in a new machine (PROD)
    A. Installation
This note does not distinguish whether Portal shares the same database as Single Sign-On and OID. For simplicity, it speaks only about 1 database. But you could also create a second infrastructure database just for the portal repository. This is better for a production system, because the Portal repository is the only product used in the 2nd database, and having 2 separate databases makes it easy to take backups of the portal repository.
On the production machine, you need to install a fresh IAS 9.0.4. Take care to use:
the same IAS patchset (9.0.4.1, 9.0.4.2, ...) on the middle-tier and infrastructure as in development
and the same characterset as in development (or UTF8)
    The result will be 2 ORACLE_HOMES and 1 infrastructure database:
    the ORACLE_HOME of the infrastructure (SID:infra904)
    the ORACLE_HOME of the midtier (SID:ias904)
    an infrastructure database (SID:asdb)
The new, empty Portal install should work fine before you go to the next step.
    B. About tablespaces (optional)
The size of the tablespaces on production should match the ones on the development machine. If not, the tablespaces will autoextend; it is not really a concern, but it is slow. You should modify the tablespaces so that production has as much space as development.
Also, it is safer to check that there is enough free space on the hard disk for the import into the database.
To modify the tablespace size, you can use the Oracle Enterprise Manager console (a command-line alternative is sketched after this list):
On Unix: . oraenv (select the infra904 home), then run: oemapp dbastudio
    On NT Start/ Programs/ Oracle Application server - infra904 / Enterprise Manager Console
    Launch standalone
    Choose the portal database (typically asdb.domain.com)
    Connect with a DBA user, sys or system
    Click Storage/Tablespaces
    Change the size of the PORTAL, PORTAL_DOC, PORTAL_LOGS, PORTAL_IDX tablespaces
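If you prefer the command line to the console, a rough SQL*Plus equivalent could look like the sketch below. The datafile path and target size are placeholders only; use the file_name values returned by the first query:
sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<EOF
-- list the current datafiles of the portal tablespaces
select tablespace_name, file_name, bytes/1024/1024 as mb
from dba_data_files
where tablespace_name like 'PORTAL%';
-- example resize only: adapt the path and size to your own system
alter database datafile '/u01/oradata/asdb/portal01.dbf' resize 500M;
exit
EOF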
    C. Backup
It could be a good idea to take a backup of the MIDTIER and INFRASTRUCTURE Oracle Homes at this point. If the import fails for any reason, this allows you to retest the process as many times as you want without needing to reinstall everything.
    Step 3 - Import in production (on PROD)
The following script is a sample Unix script that combines all the steps to import a portal repository to the production machine.
To import a portal repository and its users and groups in OID, you need to do 8 things:
    Stop the midtier to avoid errors while dropping the portal schema
    SQL*Plus with Portal
    Drop the 4 default portal schemas
Create the portal users with the same passwords as the just-deleted users and give them grants (you need to create your own custom schemas too if you have some).
    Import the dump file
    Import the users and groups into OID (optional)
    SQL*Plus with SYS : Post import changes
    Recompile everything in the database
    Reassign the imported jobs to portal
    SQL*Plus with Portal : Post import changes
    Recreate the Portal intermedia indexes
Correct an import error on wwsrc_preference$
    Make additional post import changes, by updating some portal tables, and replacing the development hostname, port or domain by the production ones.
    Rewire the portal repository with ptlconfig -dad portal
    Restart the midtier
    Here is a sample script to do this on Unix. You will need to adapt the script to your needs.
    imp_portal_schema.sh
    # BASH Script : imp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
    # This script import a portal dump file and relink it with an
    # infrastructure.
    # Script to be started from the MIDTIER
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # Development and Production machine hostname and port
    # Example
    # .._HOSTNAME machine.domain.com (name of the MIDTIER)
    # .._PORT 7782 (http port of the MIDTIER)
    # .._DN dc=domain,dc=com (domain name in a LDAP way)
    # These values can be determined automatically with the iasconfig.xml file of dev
    # and prod. But if you do not know or remember the dev hostname and port, this
    # query should find it.
    # > select name, http_url from wwpro_providers$ where http_url like 'http%'
    # These variables are used in the
    # > step 4 - security / import OID users and groups
    # > step 6 - post import changes (PORTAL)
    # Set the env variables of the DEV instance
    rm /tmp/iasconfig_env.sh
    xml -f -s xsl/portal_env_unix.xsl -o /tmp/iasconfig_env.sh exp_data/iasconfig.xml
    . /tmp/iasconfig_env.sh
    export DEV_HOSTNAME=$WEBCACHE_HOSTNAME
    export DEV_PORT=$WEBCACHE_LISTEN_PORT
    export DEV_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # Set the env variables of the PROD instance
    . portal_env.sh
    export PROD_HOSTNAME=$WEBCACHE_HOSTNAME
    export PROD_PORT=$WEBCACHE_LISTEN_PORT
    export PROD_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # ------------------------------ Help function -----------------------------------
    function press_any_key() {
    if [ $PRESS_ANY_KEY_AFTER_EACH_STEP = "Y" ]; then
    echo
    echo Press enter to continue
    read $ANY_KEY
    else
    echo
fi
}
    echo "------------------------------- Import ------------------------------------"
    # create a directory for the logs
    mkdir imp_log
    # copy the env variables in the log just in case
    export > imp_log/imp_env_variable.txt
    echo "--------------------- step 1 - stop the midtier"
    # This step is needed to avoid most case of ORA-01940: user connected
    # when dropping the portal user
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl stopall
    press_any_key
    echo "--------------------- step 2 - drop and create empty users"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/drop_create_user.log
    ---- Drop users
    -- Warning: You need to stop all SQL*Plus connection to the
    -- portal schema before that else the drop will give an
    -- ORA-01940: cannot drop a user that is currently connected
    drop user portal_public cascade;
    drop user portal_app cascade;
    drop user portal_demo cascade;
    drop user portal cascade;
    ---- Recreate the users and give them grants"
    -- The new users will have the same passwords as the users we just dropped
    -- above. Do not forget to add your exported custom users
    create user portal identified by $PORTAL_PASSWORD default tablespace portal;
    grant connect,resource,dba to portal;
    create user portal_app identified by $PORTAL_APP_PASSWORD default tablespace portal;
    grant connect,resource to portal_app;
    create user portal_demo identified by $PORTAL_DEMO_PASSWORD default tablespace portal;
    grant connect,resource to portal_demo;
    create user portal_public identified by $PORTAL_PUBLIC_PASSWORD default tablespace portal;
    grant connect,resource to portal_public;
    alter user portal_public grant connect through portal;
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wwv/wdbigra.sql portal
    exit
    IASDB
    press_any_key
    echo "--------------------- step 3 - import"
    imp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=imp_log/import.log full=y
    press_any_key
    echo "--------------------- step 4 - import the OID users and groups (optional)"
    # Some errors will be raised when running the ldapadd because at least the
    # default entries will not be able to be inserted. Remove them from the
    # ldif file if you want to avoid them. Due to the flag '-c', ldapadd ignores
    # duplicate entries. Another more radical solution is to erase all the entries
    # of the users and groups in OID before to run the import.
    # Replace the domain name in the XML files.
    cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
    cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
    # Remove the authpassword attributes with a XSL stylesheet
    xml -f -s xsl/del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
    echo '--- Importing Groups'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_groups.xml -v
    echo '--- Importing Users'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_users.xml -v
    press_any_key
    echo "--------------------- step 5 - post import changes (SYS)"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/sys_post_changes.log
    ---- Recompile the invalid packages"
    -- On the midtier, the script utlrp is not present. This step
    -- uses a copy of it stored in patch/utlrp.sql
    select count(*) INVALID_OBJECT_BEFORE from all_objects where status='INVALID';
    start patch/utlrp.sql
    set lines 999
    select count(*) INVALID_OBJECT_AFTER from all_objects where status='INVALID';
    ---- Jobs
    -- Reassign the JOBS imported to PORTAL. After the import, they belong
    -- incorrectly to the user SYS.
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 6 - post import changes (PORTAL)"
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool imp_log/portal_post_changes.log
    ---- Intermedia
    -- Recreate the portal indexes.
    -- inctxgrn.sql is missing from the 9040 CD-ROMS. This is the bug 3536937.
    -- Fixed in 9041. The missing script is contained in the downloadable zip file.
    start patch/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    ---- Import error
alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
primary key (subscriber_id, id)
using index wwsrc_preference_idx1
/
begin
DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
'', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
static_policy=>true);
end ;
/
---- Modify tables with full URLs
-- If the domain name of prod and dev are different, this step is really important.
-- It modifies the portal tables that contain references to the hostname or port
-- of the development machine. (For more explanation: see Additional steps in the note)
-- groups (dn)
update wwsec_group$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' )
/
update wwsec_group$
set dn_hash = wwsec_api_private.get_dn_hash( dn )
/
-- users (dn)
update wwsec_person$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' )
/
update wwsec_person$
set dn_hash = wwsec_api_private.get_dn_hash( dn)
/
-- subscriber
update wwsub_model$
set dn=replace( dn, '$DEV_DN', '$PROD_DN' ), GUID=':1'
where dn like '%$DEV_DN%'
/
-- preferences
update wwpre_value$
set varchar2_value=replace( varchar2_value, '$DEV_DN', '$PROD_DN' )
where varchar2_value like '%$DEV_DN%'
/
update wwpre_value$
set varchar2_value=replace( varchar2_value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where varchar2_value like '%$DEV_HOSTNAME:$DEV_PORT%'
/
-- page url items
update wwv_things
set title_link=replace( title_link, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where title_link like '%$DEV_HOSTNAME:$DEV_PORT%'
/
-- web providers
update wwpro_providers$
set http_url=replace( http_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where http_url like '%$DEV_HOSTNAME:$DEV_PORT%'
/
-- html links created by the RTF editor inside text items
update wwv_text
set text=replace( text, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where text like '%$DEV_HOSTNAME:$DEV_PORT%'
/
-- Portlet metadata nls: help URL
update wwpro_portlet_metadata_nls$
set help_url=replace( help_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where help_url like '%$DEV_HOSTNAME:$DEV_PORT%'
/
-- URL items (There is a trigger on this table building absolute_url automatically)
update wwsbr_url$
set absolute_url=replace( absolute_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where absolute_url like '%$DEV_HOSTNAME:$DEV_PORT%'
/
-- Things attributes
update wwv_thingattributes
set value=replace( value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
where value like '%$DEV_HOSTNAME:$DEV_PORT%'
/
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 7 - ptlconfig"
    # Configure portal such that portal uses the infrastructure database
    cd $MIDTIER_ORACLE_HOME/portal/conf/
    ./ptlconfig -dad portal
    cd -
    mv $MIDTIER_ORACLE_HOME/portal/logs/ptlconfig.log imp_log
    press_any_key
    echo "--------------------- step 8 - restart the midtier"
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl startall
    date
Each step can generate its own errors due to a lot of factors. It is better to run the import step by step the first time.
Do not forget to check the output of the log files created during the various steps of the import:
imp_log/drop_create_user.log - Spool when dropping and recreating the portal users
imp_log/import.log - Import log file when importing the portal_exp.dmp file
imp_log/sys_post_changes.log - Spool when making post changes with SYS
imp_log/portal_post_changes.log - Spool when making post changes with PORTAL
imp_log/ptlconfig.log - Log file of ptlconfig when rewiring the midtier
    Step 4 - Test
    A. Check the log files
    B. Test the website and see if it works fine.
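To help with point A, a quick way to scan all the generated logs for suspicious lines is a grep over the imp_log directory. The filter below is only a starting point; IMP-00041 is excluded because it is expected (see the PROBLEMS section):
# sketch: rough scan of the import logs
grep -iE "ORA-|IMP-|EXP-|error" imp_log/*.log | grep -v "IMP-00041"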
    Step 5 - take a backup
Take a backup of all ORACLE_HOMEs and databases to protect against hardware problems. You need to copy:
    All the files of the 2 ORACLE_HOME
    And all the database files.
    Step 6 - Additional steps
    Here are some additional steps.
SSO external applications ( they are part of the orasso schema and are not imported yet )
Page URL items ( they seem to store the full URL ) - included in imp_portal_schema.sh
    Web Providers ( the URL needs to be changed ) - included in imp_portal_schema.sh
    Text items edited with the RTF editor in IE and containing links - included in imp_portal_schema.sh
Most of them are taken care of by 'step 6 - post import changes (PORTAL)' of the script, except the first one.
    1. SSO import
    This script imports only Portal and the users/groups of OID. Not the list of the external application contained in the orasso user.
In Portal 9.0.4, there is a script called SSOMIG that resides in $INFRA_ORACLE_HOME/sso/bin and allows you to move:
    Definitions and user data for external applications
    Registration URLs and tokens for partner applications
    Connection information used by OracleAS Discoverer to access various data sources
    See:
    Oracle® Application Server Single Sign-On Administrator's Guide 10g (9.0.4) Part Number B10851-01
    14. Exporting and Importing Data
    2. Page items: the page URL items store the full URL.
    This is Bug 2661805 fixed in Portal 9.0.2.6.
The following work-around is implemented in the post import step of imp_portal_schema.sh
    -- page url items
    update wwv_things
    set title_link=replace( title_link, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where title_link like '%$DEV_HOSTNAME:$DEV_PORT%'
3. Web Providers
The URLs of the Web providers also need to change. Like the page items, they contain the full path of the webserver.
You can get the list of the URLs to change with this query:
select name, http_url from PORTAL.WWPRO_PROVIDERS$ where http_url like 'http%';
The following work-around is implemented in the post import step of imp_portal_schema.sh
    -- web providers
    update wwpro_providers$
    set http_url=replace( http_url, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where http_url like '%$DEV_HOSTNAME:$DEV_PORT%'
    4. The production and development machine do not share the same domain
    If the domain of the production and the development are not the same, the DN (name in LDAP) of all users needs to change.
    Let's say from
    dc=dev_domain,dc=com -> dc=prod_domain,dc=com
1. Before uploading the exported files, all the strings in the 2 files that contain 'dc=dev_domain,dc=com' have to be replaced by 'dc=prod_domain,dc=com'.
2. In the wwsec_group$ and wwsec_person$ tables in portal, the DNs need to change too.
The following work-around is implemented in the post import step of imp_portal_schema.sh
    -- groups (dn)
    update wwsec_group$
    set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' )
    update wwsec_group$
    set dn_hash = wwsec_api_private.get_dn_hash( dn )
    -- users (dn)
    update wwsec_person$
    set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' )
    update wwsec_person$
    set dn_hash = wwsec_api_private.get_dn_hash( dn)
    5. Text items with HTML links
Sometimes people store full URLs inside their text items; it happens mostly when they insert links with the RichText editor in IE.
The following work-around is implemented in the post import step of imp_portal_schema.sh
    -- html links created by the RTF editor inside text items
    update wwv_text
    set text=replace( text, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where text like '%$DEV_HOSTNAME:$DEV_PORT%'
    6. OID Custom password policy
It happens quite often that people change the password policy of the OID server, because with the default policy the password expires after 60 days. If this was done, do not forget to make the same changes in the new installation.
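To compare the expiry setting of both OID servers, an ldapsearch like the sketch below can be used. The policy entry DN shown is the usual default location; it may be different on a customized system, so verify it first:
# sketch: read the password expiry time in seconds (5184000 = 60 days by default)
ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD \
  -b "cn=PwdPolicyEntry,cn=common,cn=products,cn=oraclecontext" \
  -s base "objectclass=*" pwdmaxage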
    PROBLEMS
    1. Import log has some errors
    A. EXP-00091 -Exporting questionable statistics
    You can ignore this error.
    B. IMP-00017 - WWSRC_PREFERENCE$
    When importing, there is one import error:
    IMP-00017: following statement failed with ORACLE error 921:
    "ALTER TABLE "WWSRC_PREFERENCE$" ADD "
    IMP-00003: ORACLE error 921 encountered
ORA-00921: unexpected end of SQL command
The primary key is not created. You can create it with this command
in SQL*Plus with the user portal. Then re-add the missing VPD policy.
alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
primary key (subscriber_id, id)
using index wwsrc_preference_idx1
/
begin
DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
'', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
static_policy=>true);
end ;
/
Step 6 in the script "imp_portal_schema.sh" takes care of this.
    C. IMP-00017 - WWDAV$ASL
    . importing table "WWDAV$ASL"
Note: table contains ROWID column, values may be obsolete 113 rows imported
This warning is normal, the table really contains a ROWID column.
    D. IMP-00041 - Warning: object created with compilation warnings
This error is normal too. The packages giving these errors have
dependencies on packages not yet imported. A recompilation is done
    after the import.
    E. ldapadd error 'cannot add add entries containing authpasswords'
    # ldap_add: DSA is unwilling to perform
    # ldap_add: additional info: You cannot add entries containing authpasswords.
    "authpasswords" are automatically generated values from the real password of the user stored in userpassword. These values do not have to be exported from ldap.
In the import script, I remove the additional tags with an XSL stylesheet, 'del_authpassword.xsl'. See above.
    F. IMP-00017: WWSTO_SESSION$
    IMP-00017: following statement failed with ORACLE error 2298:
    "ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1""
    IMP-00003: ORACLE error 2298 encountered
    ORA-02298: cannot validate (PORTAL.WWSTO_SESS_FK1) - parent keys not found
Here is a work-around for the problem. It will normally be integrated in a next version of the scripts.
    SQL> delete from WWSTO_SESSION_DATA$;
    7690 rows deleted.
    SQL> delete from WWSTO_SESSION$;
    1073 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1";
    Table altered.
    G. IMP-00017 - ORACLE error 1 - DBMS_JOB.ISUBMIT
This error can appear during the import when the target database is not empty and has already been customized for some reason. For example, you export from an infrastructure and you import into a database where a lot of other programs use jobs, and unhappily the same job ids.
Due to the way the export/import of jobs is done, the jobs keep their ids after the import, and they may conflict.
    IMP-00017: following statement failed with ORACLE error 1: "BEGIN DBMS_JOB.ISUBMIT(JOB=>42,WHAT=>'begin execute immediate " "''begin wwutl_cache_sys.process_background_inval; end;'' ; exc" "eption when others then wwlog_api.log(p_domain=> ''utl'', " " p_subdomain=>''cache'', p_name=>''background'', " " p_action=>''process_background_inval'', p_information => ''E" "rror in process_background_inval ''|| sqlerrm);end;', NEXT_DATE=" ">TO_DATE('2004-08-19:17:32:16','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'SYSDATE " "+ 60/(24*60)',NO_PARSE=>TRUE); END;"
    IMP-00003: ORACLE error 1 encountered ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
    ORA-06512: at "SYS.DBMS_JOB", line 97 ORA-06512: at line 1
    Solutions:
1. Use a freshly installed database.
2. Since the conflicting jobs differ from one custom installation to another, there is no clear rule. But you can
recreate the jobs lost after the import with other ids
and/or change the job id of the other program before the import. This type of command can help you (you need to do it with SYS):
    select * from dba_jobs;
    update dba_jobs set job=99 where job=52;
commit;
    2. Import in a RAC environment
    Be aware of the Bug 2479882 when the portal database is in a RAC database.
Bug 2479882 : NEEDED TO BOUNCE DB NODES AFTER INSTALLING PORTAL 9.0.2 IN RAC NODE
3. Intermedia
After importing an environment, the Intermedia indexes are invalid. To correct the error you need to run the following in SQL*Plus with the Portal user:
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    But $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql is missing in IAS 9.0.4.0. This is Bug 3536937. Fixed in 9041. The missing scripts are contained in the downloadable zip file (exp_schema904.zip : Attachment 276688.1:1 ), directory sql. This means that practically in 9040, you have to run
    start sql/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    In the import script, it is done in the step 6 - recreate Portal Intermedia indexes.
You cannot work around the problem without the scripts. Running ctxcrind.sql alone does not work; you will get this error:
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 1035
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 476
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-20000: Oracle Text error:
    DRG-12603: CTXSYS does not own user datastore procedure: WWSBR_THING_CTX_69
    ORA-06512: at line 13
    4. ptlconfig
    If you try to run ptlconfig simply after an import you will get an error:
    Problem processing Portal instance: Configuring HTTP server settings : Installing cache data : SQL exception: ERROR: ORA-23421: job number 32 is not a job in the job queue
This is because the import done by user SYS has imported the PORTAL jobs into the SYS schema in place of PORTAL. The solution is to run:
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
In the import script, it is done in step 5 - post import changes (SYS).
    5. WWC-41417 - invalid credentials.
    When you try to login you get:
    Unexpected error encountered in wwsec_app_priv.process_signon (User-Defined Exception) (WWC-41417)
    An exception was raised when accessing the Oracle Internet Directory: 49: Invalid credentials
    Details
    Error:Operation: dbms_ldap.simple_bind_s
    OID host: machine.domain.com
    OID port number: 4032
Entry DN: orclApplicationCommonName=PORTAL,cn=Portal,cn=Products,cn=OracleContext. (WWC-41743)
Solution:
    - run secupoid.sql
    - rerun ptlconfig
    This problem has been seen after using ptlasst in place of ptlconfig.
    6. EXP-003 with a database 9.2.0.5 or 10.1.0.2
    In fact, the DB format of imp/exp has changed in 9.2.0.5 or 10.1.0.2. The EXP-3 error only occurs when the export from the 9.2.0.5.0 or 10.1.0.2.0 database is done with a lower release export utility, e.g. 9.2.0.4.0.
Due to the way this note is written, the imp/exp utility used is the one of the midtier (9014); if your portal resides in a 9.2.0.5 database, it will not work. To work around the problem, there are 2 solutions:
Change the script so that it uses the exp and imp commands of the database Oracle home.
Make a change to the 9.2.0.5 or 10.1.0.2 database to make it compatible with previous versions. The change is to modify an internal database view before exporting/importing the data.
    A work-around is given in Bug 3784697
    1. Make a note of the export definition of exu9tne from
    $OH/rdbms/admin/catexp.sql
    2. Copy this to a new file and add "UNION ALL select * from sys.exu9tneb" to the end of the definition
    3. Run this as sys against the DB to be exported.
    4. Export as required
    5. Put back the original definition of exu9tne
    eg: For 9204 the workaround view would be:
    CREATE OR REPLACE VIEW exu9tne (
    tsno, fileno, blockno, length) AS
    SELECT ts#, segfile#, segblock#, length
    FROM sys.uet$
    WHERE ext# = 1
    UNION ALL
    select * from sys.exu9tneb
    7. EXP-00006: INTERNAL INCONSISTENCY ERROR
    This is Bug 2906613.
    The work-around given in this bug is the following:
    - create the following view, connected as sys, before running export:
    CREATE OR REPLACE VIEW exu8con (
    objid, owner, ownerid, tname, type, cname,
    cno, condition, condlength, enabled, defer,
    sqlver, iname) AS
    SELECT o.obj#, u.name, c.owner#, o.name,
    decode(cd.type#, 11, 7, cd.type#),
    c.name, c.con#, cd.condition, cd.condlength,
    NVL(cd.enabled, 0), NVL(cd.defer, 0),
    sv.sql_version, NVL(oi.name, '')
    FROM sys.obj$ o, sys.user$ u, sys.con$ c,
    sys.cdef$ cd, sys.exu816sqv sv, sys.obj$ oi
    WHERE u.user# = c.owner# AND
    o.obj# = cd.obj# AND
    cd.con# = c.con# AND
    cd.spare1 = sv.version# (+) AND
    cd.enabled = oi.obj# (+) AND
    NOT EXISTS (
    SELECT owner, name
    FROM sys.noexp$ ne
    WHERE ne.owner = u.name AND
    ne.name = o.name AND
    ne.obj_type = 2)
    The modification of exu8con simply adds support for a constraint type that had not previously been supported by this view. There is no negative impact.
    8. WWSBR_DOC_CTX_54 is invalid
After the recompilation of the packages, one object remains invalid (in sys_post_changes.log):
    INVALID_OBJECT_AFTER
    1
    select owner, object_name from all_objects where status='INVALID'
    CTXSYS WWSBR_DOC_CTX_54
    CREATE OR REPLACE procedure WWSBR_DOC_CTX_54
    (rid in rowid, bilob in out NOCOPY blob)
    is begin PORTAL.WWSBR_CTX_PROCS.DOC_CTX(rid,bilob);end;
    This object is not used anymore by portal. The error can be ignored. The procedure can be removed too. This is Bug 3559731.
    9. You do not have permission to perform this operation. (WWC-44131)
It seems that there are problems if:
- the groups on the production machine do not reside in the default place in OID,
- and the group creation base and group search base were changed.
In that case, the cloning of the repository works without problem, but it seems that the command 'ptlconfig -dad portal' does not reset the GUIDs and DNs of the groups correctly. I have not checked this yet.
The solution seems to be to use the script given in the 9.0.2 Note 228516.1, and to run group_sec.sql to reset all the DNs and GUIDs in the copied instance.
    10. Invalid Java objects when exporting from a 9.x database and importing in a 10g database
    If you export from a 9.x database and import in a 10g database, after running utlrp.sql, 18 Java objects will be invalid.
    select object_name, object_type from user_objects where status='INVALID'
    SQL> /
    OBJECT_NAME OBJECT_TYPE
    /556ab159_Handler JAVA CLASS
    /41bf3951_HttpsURLConnection JAVA CLASS
    /ce2fa28e_ProviderManagerClien JAVA CLASS
    /c5b98d35_ServiceManagerClient JAVA CLASS
    /d77cf2ab_SOAPServlet JAVA CLASS
    /649bf254_JavaProvider JAVA CLASS
    /a9164b8b_SpProvider JAVA CLASS
    /2ee43ac9_StatefulEJBProvider JAVA CLASS
    /ad45acec_StatelessEJBProvider JAVA CLASS
    /da1c4a59_EntityEJBProvider JAVA CLASS
    /66fdac3e_OracleSOAPHTTPConnec JAVA CLASS
    /939c36f5_OracleSOAPHTTPConnec JAVA CLASS
    org/apache/soap/rpc/Call JAVA CLASS
    org/apache/soap/rpc/RPCMessage JAVA CLASS
    org/apache/soap/rpc/Response JAVA CLASS
    /198a7089_Message JAVA CLASS
    /2cffd799_ProviderGroupUtils JAVA CLASS
    /32ebb779_ProviderGroupMgrProx JAVA CLASS
    18 rows selected.
This is a known issue. It can be solved by applying one of the following patches, depending on your IAS version.
    Bug 3405173 - PORTAL 9.0.4.0.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100409 - PORTAL 9.0.4.1.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100417 - PORTAL 9.0.4.2.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    11. Import : IMP-00003: ORACLE error 30510 encountered
When importing Portal 9.0.4.x, it could be that the database-side import produces an error ORA-30510. The new perl script works around the issue in the portal_post_import.sql script, but the BASH scripts do not. If you use the BASH scripts, after the import, please run this command manually in SQL*Plus logged in as portal.
    ---- Import error 2 - ORA-30510 when importing
    CREATE OR REPLACE TRIGGER logoff_trigger
    before logoff on schema
    begin
    -- Call wwsec_oid.unbind to close open OID connections if any.
    wwsec_oid.unbind;
    exception
    when others then
    -- Ignore all the errors encountered while unbinding.
    null;
end logoff_trigger;
/
This is logged as Bug 4458413.
12. Exporting from a 9.0.1 database and importing in a 9.2.0.5+ or 10g DB
It could be that, when exporting from a 9.0.1 database and importing into a 10g database, the Java classes do not get compiled correctly. The following errors are seen:
    ORA-29534: referenced object PORTAL.oracle/net/www/proto/https/HttpsURLConnection could not be resolved
    errors:: class oracle/net/www/proto/https/HttpsURLConnection
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactoryImpl could not be found
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactory could not be found
In such a case, please apply the corresponding patch below after the import into the 10g database.
    Bug 3405173 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.0
    Bug 4100409 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.1
    Main Differences with Portal 9.0.2
If you are used to this technique in Portal 9.0.2, you may be interested in the main differences with the same note for Portal 9.0.2. They are summarized topic by topic below.
Cutter database
- Portal 9.0.2: Portal can be part of an infrastructure database or of a custom external database. The portal schema is imported into an empty database.
- Portal 9.0.4: Portal can only be installed in a 'Cutter database', a database created with RepCA or OUI that always contains OID, DCM and so on... The portal schema is imported into a 'Cutter database' (new).
group_sec.sql
- Portal 9.0.2: group_sec.sql is used to correct the GUIDs of OID stored in Portal.
- Portal 9.0.4: ptlconfig -dad portal -oid is used to correct the GUIDs of OID stored in Portal (new).
1 script
- Portal 9.0.2: the import / export is divided into several steps with several scripts.
- Portal 9.0.4: the import script is done in one step. Additional steps are included in the script. This requires knowing the hostname and port of the original development machine (new).
Import
- Portal 9.0.2, the steps are: creation of an empty database; creation of the users with password=username; import.
- Portal 9.0.4, the steps are: creation of an IAS 10g infrastructure DB (RepCA or OUI); deletion of the new portal schemas (new); creation of the users with the same passwords as the schemas just dropped; import.
DAD
- Portal 9.0.2: the DAD needed to be changed.
- Portal 9.0.4: the passwords are not changed, the DAD does not need to be changed.
Bugs
- Portal 9.0.2: 2 bugs were worked around by change_host.sh.
- Portal 9.0.4: some additional tables need to be updated manually before running ptlasst. This is Bug 3762961.
Export of LDAP
- Portal 9.0.2: the export is done in LDIF files. If prod and dev have different domains, it is quite difficult to change the domain name in these files due to the line wrapping at 78 characters.
- Portal 9.0.4: the export is done in XML files, in the DSML format (new). It is a lot easier to change the XML files if the domain name is different from PROD to DEV.
Download
- Portal 9.0.2: you have to cut and paste the scripts.
- Portal 9.0.4: the scripts are attached to the note. Just download them.
Rewiring
- Portal 9.0.2 uses ptlasst:
ptlasst.csh -mode MIDTIER -i custom -s $PORTAL_USER -sp $PORTAL_PASSWORD -c $PORTAL_HOSTNAME:$PORTAL_DB_PORT:$PORTAL_SERVICE_NAME -sdad $PORTAL_DAD -o orasso -op $ORASSO_PASSWORD -odad orasso -host $MIDTIER_HOSTNAME -port $MIDTIER_HTTP_PORT -ldap_h $INFRA_HOSTNAME -ldap_p $OID_PORT -ldap_w $IAS_PASSWORD -pwd $IAS_PASSWORD -sso_c $INFRA_HOSTNAME:$INFRA_DB_PORT:$INFRA_SERVICE_NAME -sso_h $INFRA_HOSTNAME -sso_p $INFRA_HTTP_PORT -ultrasearch -oh $MIDTIER_ORACLE_HOME -mc false -mi true -chost $MIDTIER_HOSTNAME -cport_i $WEBCACHE_INV_PORT -cport_a $WEBCACHE_ADM_PORT -wc_i_pwd $IAS_PASSWORD -emhost $INFRA_HOSTNAME -emport $EM_PORT -pa orasso_pa -pap $ORASSO_PA_PASSWORD -ps orasso_ps -pp $ORASSO_PS_PASSWORD -iasname $IAS_NAME -verbose -portal_only
- Portal 9.0.4 uses ptlconfig (new):
ptlconfig -dad portal
Environment variables
- Portal 9.0.2: a lot of environment variables are needed.
- Portal 9.0.4: just 3 environment variables are needed: the password of SYS, the password of IAS, and the ORACLE_HOME of the Midtier. All the rest is found in iasconfig.xml and LDAP (new).
    TO DO
    - Check if the orclcommonapplication name fits SID.hostname
    - Check what gives the import of a portal30 upgraded schema inside a schema named portal
    - Explain how to copy the portal*.dbf files in place of export/import and the limitation of tra
