Where to deploy iFS for PROD env: mid-tier or inf-tier?

Mid and infra tiers are on separate servers. Does it make more sense to deploy iFS onto the mid or infra tiers from a performance standpoint?
Brian

Brian,
I guess it depends more upon security than performance. The protocols you're making available will determine where to put the iFS server. Obviously you'll also need a 'fat' connection between your iFS and database machines. If all (public) access is through a web application, then most performance will be gained from having app servers with large memory and fast processors rather than from the performance of the iFS server itself.

Similar Messages

  • Help needed regarding the deployment architecture for PROD env

    Dear All,
    Please help me with some clarifications regarding the deployment architecture for PROD env.
    As of now I have 2 single node 12.1.1 installations for DEV and CRP/TEST respectively.
    Shortly I will be having a PROD env of 12.1.1 with one DB node and 2 middle tier (apps) node. I need help in whether -
    1) whether to have a shared APPL_TOP on the SAN for the 2 apps nodes, or separate APPL_TOPs for each apps node. The point is which will be beneficial in my case from a business point of view. The INST_TOPs will be node-specific in any case, right?
    2) Where to enable the Concurrent Managers for better performance: on the DB node, on the primary apps node, or on both apps nodes?
    12.1.1 is installed on RHEL 5.3.
    Thanks and Regards

    Hi,
    Please refer to (Note: 384248.1 - Sharing The Application Tier File System in Oracle E-Business Suite Release 12).
    For enabling the CM, it depends on what resources you have on each server. I would recommend you install it on the application tier node, and leave the database on one server with no application services (if possible).
    Regards,
    Hussein

  • Where should I put the conc mgr mid-tier or backend ?

    Hello All,
    Is there any advantage to putting the conc mgrs on the mid-tier as opposed to the database tier other than saving some memory usage on the database tier ? All the work would be carried out on the database tier wouldn't it ?
    thanks
    Dave

    Hi,
    What I was really looking for was whether there is some particular advantage I could gain from putting the mgrs on the mid-tier? The main advantage is having all the application tier files on one node, which makes them easier to maintain than having the application tier split across both tiers. In this case, you will have to patch the application only once (not on every node where APPL_TOP is present).
    If I put the conc mgrs on the apps tier, this means that I would never need to apply patches to the backend, given that all the apps software is on the apps tier and the database software is on the database tier. If I have 2 apps tiers I could use PCP? But since all the work is carried out on the db tier this (PCP) would be a pointless exercise, right? You cannot use PCP if you place all the application tier files on one node. If you want PCP, you will need multiple application tier nodes.
    Thanks,
    Hussein

  • How to deploy across multiple mid-tier instances

    I was hoping someone may be able to point me in the right direction. We have two portal/webcache mid-tier instances on release 10G AS 9.0.4.2. On the ias_admin home page I tried to create a cluster and add the two instances to a cluster but it won't let me do it for a portal type install. If that is the case, how do people handle deploying such things as portlet providers and data sources across multiple mid-tier instances? Here is the concern from our Portal admin:
    As discussed, the inability to cluster the mid-tiers will cause issues in production as we try to keep deployed portlet providers, with their required configurations and datasources, in sync. While deploying and configuring 1-2 mid-tiers for a given portlet manually is doable, the chance of out-of-sync components or configuration settings is amplified with each additional mid-tier. If we manually deploy 5 mid-tiers and accidentally mistype the datasource, for example, we'll have a random error that occurs whenever requests are serviced by that mid-tier. This will make troubleshooting difficult. We really need some way to deploy and configure settings one time and have these items propagated to all the other mid-tiers.
    What is the best way to handle this scenario?
    Regards and thanks for any info,
    Chris Schumacher
    Embry-Riddle Aeronautical University

    Jar files are not stored on the file system.
    Temporary files are held in the bpel/domain/default/tmp folder, whereas jar files live in the dehydration store for clustered deployment.
    If you configure a BPEL (not AS) cluster, files will be propagated to all nodes.
    MJ
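    Absent AS-level clustering, one pragmatic workaround (a sketch under assumptions — the hostnames, paths, and the rsync approach are all hypothetical, not from this thread) is to stage one vetted copy of the provider artifacts and configuration, then push it to every mid-tier from a script, so no node is ever hand-edited:

```shell
#!/bin/sh
# Hypothetical sketch: push one staged set of files to every mid-tier host
# so each node receives byte-identical artifacts. HOSTS and SRC are
# placeholder values; substitute your real hostnames and staging directory.
HOSTS="midtier1 midtier2 midtier3"
SRC=/staging/providers/

for h in $HOSTS; do
  # A real run would do something like:
  #   rsync -a --delete "$SRC" "$h:/u01/app/midtier/providers/"
  echo "sync $SRC -> $h"
done
```

    The design point is simply that a single source of truth plus a mechanical copy step removes the per-node typo risk the admin describes above; the transport (rsync, scp, a shared mount) matters less than never editing each node by hand.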

  • Exact Match of Windows Fonts on Unix Mid-Tier for PDF, AFM License

    Having read the many postings and Metalink articles on this issue, it sounds like I need only the AFM files on the Unix side for Oracle Reports to accurately render the real estate on the report (for PDF), and that the actual report/fonts will be rendered on the (Windows) client side. My question is: does it violate the license of the Microsoft fonts to generate the AFM files and deploy them on the Unix mid-tier, since these are only the "metric files"? If not, has anybody actually purchased the licenses to put, say, the Microsoft Office font set on a Unix Oracle Reports box? Any problems or issues if you did?
    Thanks

    Thanks for the response Melissa
    Why did you abandon the installation on Windows then?
    Could you get every component working on Windows prior to moving over to Linux?
    If you had everything working on Windows, could you please tell me what setup you had, i.e. what operating system, how much RAM, etc.?
    I know you kindly gave your number so I could call you about this, but I'll have to check with my boss before calling as I notice you are calling from the US.
    We are only a very small Oracle software development firm (in the UK) looking to add Collaboration Suite to our portfolio.
    Thanks again
    Charlie

  • CPUApr2007 Install - Inf/Mid-tier or both

    I have a question.
    I am getting ready to apply patches from the CPUApr2007. I am running Application Server 10.1.2.0.2 with 10.1.0.5 database. I am on Linux x86.
    My question is:
    How do I know whether a patch pertains to the infrastructure, the mid-tier, or both?
    Patch 5901877 applies to the database, however I have seen posts where it has been applied to the mid-tier also.
    patch 5922120 - Has forms, reports and database. Has database bug fixes in it, and has bugs listed that are also listed in 5901877.
    Do I apply both patches, applying 5901877 first, to both or just infrastructure? Then apply 5922120 to both infrastructure and mid-tier?
    Thank you in advance.


  • Bpel deployment fails for all processes that have revision other than 1.0.

    Using: Release *10.1.3.3.1*
    Hello All,
    Bpel deployment fails for all processes that have revision other than *1.0*.
    We have been attempting to deploy several BPEL projects via ANT script to a target environment and are encountering failures to deploy for every project which isn’t a (revision 1.0). We are getting the following error whenever we try to deploy a process with a revision other than 1.0:
    D:\TJ_AutoDeploy\BPEL_AutoDeploy_BETA\build.xml:65: BPEL archive doesnt exist in directory "{0}"
         at com.collaxa.cube.ant.taskdefs.DeployRemote.getJarFile(DeployRemote.java:254)
         at com.collaxa.cube.ant.taskdefs.DeployRemote.deployProcess(DeployRemote.java:409)
         at com.collaxa.cube.ant.taskdefs.DeployRemote.execute(DeployRemote.java:211)
         at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
         at org.apache.tools.ant.Task.perform(Task.java:364)
         at org.apache.tools.ant.Target.execute(Target.java:341)
         at org.apache.tools.ant.Target.performTasks(Target.java:369)
         at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216)
         at org.apache.tools.ant.Project.executeTarget(Project.java:1185)
         at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:40)
         at org.apache.tools.ant.Project.executeTargets(Project.java:1068)
         at org.apache.tools.ant.Main.runBuild(Main.java:668)
         at org.apache.tools.ant.Main.startAnt(Main.java:187)
         at org.apache.tools.ant.launch.Launcher.run(Launcher.java:246)
         at org.apache.tools.ant.launch.Launcher.main(Launcher.java:67)
    The structure of our automated deployment script is as follows:
    First, a batch script calls Jdeveloper_BPEL_Prompt.bat in order to set all the necessary environment variables (ORACLE_HOME, BPEL_HOME, ANT_HOME, etc.) for ant.
    Next, the script lists every .jar file within the directory to an .ini file called BPEL_List.ini. Furthermore, the BPEL_DIR, ADMIN_USER and ADMIN_PSWD variables are set and initialized respectively to:
    -     "." - pointing to the directory the script runs from, because all the BPEL processes are located there
    -     "oc4jadmin"
    -     "*********" (whatever the password for our environment is)
    We've developed a method to have the script prompt the user to select the target environment to deploy to. Once the user selects the appropriate environment, the script goes through BPEL_List.ini and, for every BPEL process listed, a loop runs:
    DO ant
    -Dprocess.name=%%b
    -Drev= !Rev!
    -Dpath=%BPEL_DIR%
    -Ddomain=default
    -Dadmin.user=%ADMIN_USER%
    -Dadmin.password=%ADMIN_PWD%
    -Dhttp.hostname=%HOST%
    -Dhttp.port=%PORT%
    -Dverbose=true
    (What’s happening is that the variables in the batch file are being passed on to the ANT script where *%%b* is the process name, !rev! is revision #, and so on…)
    The loop goes through each line in the BPEL_List.ini and tokenizes the BPEL process into 3 parts *(%%a, %%b, and %%c)* but we only extract 2 parts: *%%b* (process name) and *%%c* which becomes !Rev! (revision number).
    Example:
    Sample BPEL process:
    bpel_ThisIsProcess1_1.0.jar
    bpel_ThisIsProcess2_SOAv2.19.0.001B.jar
    After tokenizing:
    %%a     %%b     %%c
    bpel     ThisIsProcess1     1.0.jar
    bpel     ThisIsProcess2     SOAv2.19.0.001B.jar
    We use *!Rev!* and not *%%c* because *%%c* returns the revision number plus the ".jar" file extension, as illustrated above. To circumvent this, we parse *%%c* and strip its last 4 characters:
    set RevN=%%c
    set RevN=!RevN:~0,-4!
    Hence, the usage of !Rev!.
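    For readers outside Windows batch, the same tokenize-and-strip step can be expressed with plain POSIX parameter expansion (a sketch; the bpel_<ProcessName>_<revision>.jar naming convention is taken from the examples below, and assumes no underscore inside the process name):

```shell
#!/bin/sh
# Split "bpel_<ProcessName>_<revision>.jar" into its parts, mirroring the
# batch tokenization: drop the "bpel_" prefix, strip ".jar" (the batch's
# !RevN:~0,-4!), then split on the first remaining underscore.
f="bpel_ThisIsProcess2_SOAv2.19.0.001B.jar"

stem=${f#bpel_}     # -> ThisIsProcess2_SOAv2.19.0.001B.jar
stem=${stem%.jar}   # -> ThisIsProcess2_SOAv2.19.0.001B
name=${stem%%_*}    # process name = text before the first "_"
rev=${stem#*_}      # revision     = text after the first "_"

echo "process=$name rev=$rev"
# -> process=ThisIsProcess2 rev=SOAv2.19.0.001B
```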
    Below is a screenshot post of the ANT build.xml that goes with our script:
    <!--<?xml version="1.0"?>-->
    <!--BUILD.XML-->
    <project name="bpel.deploy" default="deployProcess" basedir=".">
         <!--
         This ant build file was generated by JDev to deploy the BPEL process.
         DONOT EDIT THIS JDEV GENERATED FILE. Any customization should be done
         in default target in user created pre-build.xml or post-build.xml
         -->
         <property name="process.dir" value="${basedir}" />
              <!-- Set BPEL process name -->
              <!--
              <xmlproperty file="${process.dir}/bpel/bpel.xml"/>
              <property name="process.name" value="${BPELSuitcase.BPELProcess(id)}"/>
              <property name="rev" value="${BPELSuitcase(rev)}"/>
              -->
         <property environment="env"/>
         <!-- Set bpel.home from developer prompt's environment variable BPEL_HOME -->
              <condition property="bpel.home" value="${env.BPEL_HOME}">
                   <available file="${env.BPEL_HOME}/utilities/ant-orabpel.xml" />
              </condition>
         <!-- show that both bpel and oracle.home are located (TESTING purposes ONLY) -->
         <!-- <echo>HERE:${env.BPEL_HOME} ${env.ORACLE_HOME}</echo> -->
         <!-- END TESTING -->
         <!--If bpel.home is not yet using env.BPEL_HOME, set it for JDev -->
         <property name="oracle.home" value="${env.ORACLE_HOME}" />
         <property name="bpel.home" value="${oracle.home}/bpel" />
         <!--First override from build.properties in process.dir, if available-->
         <property file="${process.dir}/build.properties"/>
         <!--import custom ant tasks for the BPEL PM-->
         <import file="${bpel.home}/utilities/ant-orabpel.xml" />
         <!--Use deployment related default properties-->
         <property file="${bpel.home}/utilities/ant-orabpel.properties" />
         <!-- *************************************************************************************** -->
         <target name="deployProcess">
              <tstamp>
                   <format property="timestamp" pattern="MM-dd-yyyy HH:mm:ss" />
              </tstamp>
              <!-- WRITE TO LOG FILE #tjas -->
              <record name="build_verbose.log" loglevel="verbose" append="true" />
              <record name="build_debug.log" loglevel="debug" append="true" />
              <echo></echo>
              <echo>####################################################################</echo>
              <echo>BPEL_AutoDeploy initiated @ ${timestamp}</echo>
              <echo>--------------------------------------------------------------------</echo>
              <echo>Deploying ${process.name} on ${http.hostname} port ${http.port} </echo>
              <echo>--------------------------------------------------------------------</echo>
              <deployProcess
                   user="${admin.user}"
                   password="${admin.password}"
                   domain="${domain}"
                   process="${process.name}"
                   rev="${rev}"
                   dir="${process.dir}/${path}"
                   hostname="${http.hostname}"
                   httpport="${http.port}"
                   verbose="${verbose}" />
              <sleep seconds="30" />
              <!--<echo message="${process.name} deployment logged to ${build_verbose.log}"/>
              <echo message="${process.name} deployment logged to ${build.log}"/> -->
         </target>
         <!-- *************************************************************************************** -->
    </project>
    SUMMARY OF ISSUE AT HAND:
    ~ Every bpel process w/ 1.0 revision deploys with no problems
    ~ At first I would get an invalid character error, most likely due to the "!" preceding "Rev", so I tried setting rev="false" in the build.xml file. That didn't work. I then tried leaving the -Drev= attribute in the batch script blank; again only the 1.0s went through. Next I tried deploying something other than a 1.0, such as 1.2 or 2.0, and that's when I realized that if it wasn't a 1.0, it refused to go through.
    QUESTIONS:
    1.     IS THERE A WAY TO HAVE ANT LOOK INTO THE BPEL PROCESS AND PULL THE REVISION ID?
    2.     WHAT ARE WE DOING WRONG? ARE WE MISSING ANYTHING?
    3.     DID WE GO TOO FAR? MEANING, IS THERE A MUCH EASIER WAY WE OVERLOOKED/FORGOT/OR DON’T KNOW ABOUT THAT EXISTS?
    Edited by: 793292 on Jul 28, 2011 12:38 PM

    The only thing I can think of is, instead of using a MAC ACL, you could just use the default class:
    Policy Map Test
      class class-default
        police 56000 8000 exceed-action drop
    Class Map match-any class-default (id 0)
      Match any
    You would be saving a MAC ACL ;-).

  • Advice Needed for Audio and MIDI Controller Software

    I am hoping someone may be able to give me advice on how I should approach my problem.
    I am currently running a live show with audio backing tracks and a small 12 par lighting system. The light system can be controlled via standard MIDI. I am using iTunes to play back sets of backing tracks, and manually controlling the lighting system using a dedicated foot controller.
    What I would like to do is be able to use software to simultaneously play audio files and perfectly sync the lighting (fades, shots, etc. via MIDI) with the audio track. Ideally, I would be able to have the audio/lighting paired as discrete entities that could be grouped into sets.
    i.e.
    Audio-MIDI_1 = Audiotrack1 would always be paired with Lightcontroller_MIDI_events1
    Audio-MIDI_2 = Audiotrack2 would always be paired with Lightcontroller_MIDI_events2
    Audio-MIDI_3 = Audiotrack3 would always be paired with Lightcontroller_MIDI_events3
    I could then create a set of Audio-MIDI_x tracks which could be triggered in any order.
    I would like the option to be able to activate a single track, or have a complete group of tracks activated sequentially (but be able to stop and start the group as needed - you never know what will happen in a live situation). It would be nice to have a timeline UI as well.
    Now the final requirement: it should be able to run on a PISMO PowerBook. <cringe>
    I hope I am not too confusing.
    I am thinking MainStage would not be the software for this task as Leo is not an option for the Powerbook.
    I have looked at QLab and showcontrolpro, but I don't think these are right for me either.
    Any help or suggestions would be greatly appreciated.
    TIA

    Gary,
    With a Pismo you really are confined. I've been using some software from Alien Apparatus called Solo Performer Show Controller (www.alienapparatus.com); it has a timeline to sequence lights and can run 32 channels of MIDI or DMX lighting. The only setback is that it needs at least a 1 GHz processor. There may be a company like Sonnet that has an upgrade for your processor.
    The software is able to run backing tracks, send MIDI controls to effects units or other software, send hotkeys for apps, synchronize lyrics in a window or on an external monitor, and control a DMX fog machine. It also has a six-button USB foot controller which can be configured for many things, such as scrolling your songlist to select the next song, changing the volume of tracks, changing a light scene, sending a hotkey, triggering samples, etc. It is a bit pricey at just under 600 dollars, but the foot control is durable, the company has excellent support, and they listen to users to add features to the software.
    Hope this helps
    Thor

  • Installation BPEL PM mid-tier flavor beta3 fails during deployment

    During the deployment of applications and adapters
    "E:\as10g101\jdk\bin\java -Dant.home=E:\as10g101
    (CONNECT_DATA=(SERVICE_NAME=oi10g))) -quiet -e -buildfile bpminstall.xml init-midtier
    Inserting OPMN fragment ...
    BUILD FAILED
    E:\as10g101\integration\bpelpm\runtime\install\ant-tasks\bpminstall.xml:321: The following error occurred while executing this line:
    E:\as10g101\integration\bpelpm\runtime\install\ant-tasks\init-midtier.xml:80: Java returned: 1
    Total time: 0 seconds"
    It dies.
    At the same time I have an OC4J_BPEL container within my mid-tier, but I cannot connect to my BPEL console, and I am not certain what else I am missing.
    Does anybody know how to circumvent this deployment failure?

    Frank, I assume you have now successfully installed the schemas in your 10g database and went on to install BPEL on the AS midtier. I assume you have the 10.1.2 production midtier (J2EE & Webcache) installed?
    Please add the relevant part of the install log file.
    Sandor,
    Thanks for your prompt response.
    Yes, I was able to install the irca.zip file - indeed there was a java command line example in there.
    And yes, I have a mid-tier installed. It seems that part was successful, since I have an OC4J_BPEL container.
    Hereby the logging from the Configuration Assistants,
    including the successful BUILD:
    Preprocessing configuration files ...
    E:\as10g101\jdk\bin\java -Dant.home=E:\as10g101\integration\bpelpm\orabpel -classpath E:\as10g101\integration\bpelpm\orabpel\lib\ant_1.6.2.jar;E:\as10g101\integration\bpelpm\orabpel\lib\ant-launcher_1.6.2.jar org.apache.tools.ant.Main -Dinstall.type=Midtier -Dob.home=E:\as10g101\integration\bpelpm\orabpel -Dbpm.home=E:\as10g101\integration\bpelpm\runtime -Djava.home=E:\as10g101\jdk -Doracle.home=E:\as10g101 -Dhost.name=dellxp32 -Dhttp.proxy.set=false -Dhttp.port=80 -quiet -e -buildfile bpminstall.xml install-sa patch-orabpel instantiate-orabpel-files
    BUILD SUCCESSFUL
    Total time: 14 seconds
    Warning: Unable to remove existing file E:\as10g101\integration\bpelpm\orabpel\lib\bpm-infra.jar
    Warning: Unable to remove existing file E:\as10g101\integration\bpelpm\orabpel\lib\olite40.jar
    Exit: 0
    TASK: oracle.tip.install.tasks.UpdateConfigFiles
    Updating configuration files ...
    TASK: oracle.tip.install.tasks.RegisterOlite
    Registering Olite ...
    No Olite registration required.
    TASK: oracle.tip.install.tasks.DeployApps
    Deploy applications and adapters
    E:\as10g101\jdk\bin\java -Dant.home=E:\as10g101\integration\bpelpm\orabpel -classpath E:\as10g101\integration\bpelpm\orabpel\lib\ant_1.6.2.jar;E:\as10g101\integration\bpelpm\orabpel\lib\ant-launcher_1.6.2.jar org.apache.tools.ant.Main -Dinstall.type=Midtier -Dob.home=E:\as10g101\integration\bpelpm\orabpel -Dbpm.home=E:\as10g101\integration\bpelpm\runtime -Djava.home=E:\as10g101\jdk -Doracle.home=E:\as10g101 -Dhost.name=dellxp32 -Dias.name=as10g101.dellxp32 -Dorabpel.password=orabpel -Ddb.connect.string=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=dellxp32)(PORT=1521)))(LOAD_BALANCE=yes)(CONNECT_DATA=(SERVICE_NAME=oi10g))) -quiet -e -buildfile bpminstall.xml init-midtier
    Inserting OPMN fragment ...
    BUILD FAILED
    E:\as10g101\integration\bpelpm\runtime\install\ant-tasks\bpminstall.xml:321: The following error occurred while executing this line:
    E:\as10g101\integration\bpelpm\runtime\install\ant-tasks\init-midtier.xml:80: Java returned: 1
    Total time: 0 seconds
    Exit: 1
    Configuration assistant "Oracle BPEL Process Manager Configuration Assistant" was canceled
    In addition I would like to add the following:
    I am able to connect to the BPEL console (localhost:80/BPELConsole) but I cannot for example deploy the LoanFlow demo - I receive the following error in the command box:
    [ear] Building ear: E:\as10g101\integration\bpelpm\orabpel\system\appserver\oc4j\j2ee\home\applications\StarLoanUI.ear
    deployStarLoanUIoc4j:
    [java] Error: Unable to find Java:comp/ServerAdministrator:
    The reason for that could be that I need to start up the BPEL server. But that confuses me, since the OC4J container is running and I can connect to the BPEL console. At the same time, if I look in startorabpel.bat the Olite lines are still active, and within the data-sources.xml file the Oracle9i lines are inactive - although I provided the Oracle10g schema during the installation. Hopefully you are still with me and are able to shed some light on this,
    Frank

  • Error Deploying BPEL process to Mid Tier

    I'm having problems deploying a jar to midtier.
    When I attempt to deploy the jar of my BPEL process to a 10.1.2.0 GA mid-tier BPEL instance using the console's "deploy new process" I get the following error message.
    The BPEL process works and deploys with no problems on a 10.1.2.0 GA Developer's install (same machine). I have tried deploying it both from JDev and the console after un-deploying it manually. Both of these work fine in the Developers install.
    Thanks,
    Craig
    Deploy New BPEL Process
    bpel_AddressChangeValidate_2.0.jar failed to deploy.
    Error deploying BPEL archive. An error occurred while attempting to deploy the BPEL archive file "bpel_AddressChangeValidate_2.0.jar"; the exception reported is: Problem 1: [putFTP.wsdl]: null

    Figured it out...
    The problem was I forgot to copy the FTP connect info
    into the midtier FtpAdapter oc4j-ra.xml file.
    > Is there an out of the box way for low level config like this to be included in the JAR file?
    Nope, not right now. We are working on what we can do to fix this going forwards in a later release.
    > One interesting note, I was able to "deploy" my process by copying the JAR file directly into the deployment folder... What is the difference between doing this and using "deploy new process" from the console?
    When going the console route, it goes through an additional deployment-time validation check, which you were able to bypass by copying the jar directly into the deployment folder.
    You may have been hitting a bug in the validation code, otherwise you should have really seen the error message asking you to potentially fix the jndi entry in oc4j-ra.xml.
    HTH
    Shashi
    Regards,
    Craig

  • Where did the iFS Javadoc webpages go?

    The Javadoc webpages of the iFS APIs seem to have vaporized early last week. Can someone please point me to their new home? They were not easy to find in the first place, but now I am having no luck at all.
    Thanks,
    -Jeff

    I'm trying very hard to just create a new versioned document in IFS 9.0.2 via APIs. Unfortunately, I'm running into problems at many different points:
    If I don't specify a content string, I get:
    oracle.ifs.common.IfsException: IFS-30002: Unable to create new LibraryObject
    oracle.ifs.common.IfsException: IFS-31803: No Content specified in ContentObjectDefinition
    If I leave the file out of the DocDef, I get a different error:
    java.lang.NoSuchMethodError: long oracle.jdbc.dbaccess.DBAccess.lobWrite(oracle.sql.BLOB, long, byte[])
    Even if this were to work, there is quirky behaviour embedded in the code coming from the order of establishing Documents, VersionSeries, and Families. I know you guys are trying to maintain flexibility, but the structures listed in the book have hidden sequential dependencies (difficult coupling issues). It's not well documented, making these APIs very difficult to use without a lot of insider knowledge.
    It's great that you all can agree on where I should go for examples, but you have not clearly communicated to me where I can see these examples. This is the same as when I asked for javadocs. Please be kinder to me; I can't read your minds or participate in your internal communication. Don't get me wrong, I appreciate all the help, but I am not making great progress on this side.
    With all of the quirkiness my company has experienced with iFS's Java APIs, we've decided to isolate and eliminate 80% of the extra methods provided. We are trying to wrap the remaining 20% in a highly reliable Interface & Implementation. The I/F is easy to define, but even just sifting through and using the iFS bean classes for the implementation is proving difficult.
    com.rsaiia.common.pdm.Folder aDir = testFileRepository.getRootFolder()
         .getFolder("home")
         .getFolder("jeffr")
         .getFolder("SecondTest");
    com.rsaiia.common.pdm.Document myDoc = aDir.createDocument("BungBucket.unc",
         "UNCLASSIFIED_DOCUMENT",
         new File("C:\\docs\\VersionTest.txt"));
    public com.lmco.rsaiia.common.pdm.Document
    createDocument(String theDocName, String theContentTypeName, File theFolderPath)
              throws PDMException {
         com.lmco.rsaiia.common.pdm.Document theDoc;
         if (theDocName == null)
              throw new NullPointerException("No Document Name");
         if (theContentTypeName == null)
              throw new NullPointerException("No ContentType Name");
         if (theFolderPath == null)
              throw new NullPointerException("No File Name");
         String description = theDocName + " Description";
         // The content (file to be contained in the document) is associated in the
         // createDocumentDefinition call
         try {
              DocumentDefinition def = createDocumentDefinition(theDocName, description,
                   theFolderPath, theContentTypeName);
              def.setAddToFolderOption(myFolder);
              // the more general variant of createDocument does the rest
              theDoc = getDocument(createDocument(def));
              theDoc.addAttribute(m_SOURCE_FILE_LOCATION_ATTRIBUTE); // should already be there, but just in case
              PublicObject aFileObject = myFolder.findPublicObjectByPath(theDocName);
              Family aFamily = (Family) aFileObject;
              myFileSystem.makeVersioned(aFileObject); // make all created files versioned
              VersionSeries aVSeries = aFileObject.getFamily().getPrimaryVersionSeries();
              VersionDescription aVersDesc = aVSeries.getLastVersionDescription();
              System.out.println("Created Document " + theDocName + " In " + theFolderPath);
              return theDoc;
         } catch (Exception e) {
              throw new PDMException(e);
         }
    }
    /**
     * Create a DocumentDefinition.
     * @param name          the name of the new document
     * @param description   a description for the new document
     * @param filePath      a local file system path to content for this document
     * @param contentType   the name of the class object for the new document
     * @return              the created DocumentDefinition
     * @exception IfsException if the operation fails.
     */
    private DocumentDefinition createDocumentDefinition(String name,
              String description, File filePath, String contentType)
              throws IfsException {
         if (name == null)
              throw new NullPointerException("Next time, offer a document name");
         DocumentDefinition def = new DocumentDefinition(getSession());
         def.setAttribute(oracle.ifs.beans.Document.NAME_ATTRIBUTE,
              AttributeValue.newAttributeValue(name));
         def.setAttribute(oracle.ifs.beans.Document.DESCRIPTION_ATTRIBUTE,
              AttributeValue.newAttributeValue(description));
         // Set the class only if it's specified
         ClassObject co = (contentType == null)
              ? null : lookupClassObject(contentType);
         if (co != null)
              def.setClassObject(co);
         // Set the content if specified
         if (filePath != null)
              applyContentSettings(def, filePath.toString());
         return def;
    }
    /**
     * Gets the file extension from the supplied file name and uses this to
     * infer the Format, which is written to the supplied document definition
     * object.
     */
    private void applyContentSettings(DocumentDefinition def, String filePath)
              throws IfsException {
         if ((filePath != null) && (def != null)) {
              String ext = null;
              int pos = filePath.lastIndexOf(".");
              if (pos > 0 && pos < filePath.length())
                   ext = filePath.substring(pos + 1);
              if (ext == null)
                   ext = "txt"; // default to "txt"
              // set the Format based on the extension from the filepath
              Format fmt = lookupFormatByExtension(ext);
              def.setFormat(fmt);
              def.setContentPath(filePath);
         }
    }
    /**
     * Wraps an oracle.ifs.beans.Document as a PDM Document.
     * @param theDoc an Oracle Document.
     * @return PDMDocument
     * @throws PDMException if the operation fails.
     */
    private com.lmco.rsaiia.common.pdm.Document getDocument(oracle.ifs.beans.Document theDoc)
              throws PDMException {
         try {
              return new PDMDocument(theDoc, getSession(), getFileSystem());
         } catch (Exception e) {
              throw new PDMException(e);
         }
    }

    private oracle.ifs.beans.Document createDocument(DocumentDefinition def) // was public
              throws IfsException {
         oracle.ifs.beans.Document theDoc =
              (oracle.ifs.beans.Document) getSession().createPublicObject(def);
         return theDoc;
    }

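The extension-inference step in applyContentSettings can be exercised on its own. Here is a minimal standalone sketch of just that piece, with the iFS Format lookup omitted and the class name being hypothetical; it mirrors the lastIndexOf logic above, with a small guard so a trailing dot also falls back to the "txt" default:

```java
public class ExtensionSketch {

    // Take the text after the last '.', falling back to "txt" when there
    // is no usable extension (no dot, leading dot, or trailing dot).
    static String inferExtension(String filePath) {
        String ext = null;
        int pos = filePath.lastIndexOf('.');
        if (pos > 0 && pos < filePath.length() - 1)
            ext = filePath.substring(pos + 1);
        return (ext == null) ? "txt" : ext;
    }

    public static void main(String[] args) {
        System.out.println(inferExtension("report.pdf")); // pdf
        System.out.println(inferExtension("README"));     // txt
        System.out.println(inferExtension("archive."));   // txt
    }
}
```

In a real iFS session the returned extension would then be handed to lookupFormatByExtension to pick the Format for the document definition.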
  • Had to restart information store service. Caused by throttling?? Event ID 1010 from ManagedAvailability, anywhere else to look for error logs?

    For some reason we had to restart the Information Store service. So far I've come up with these logs from the
    ManagedAvailability event logs. Is there somewhere else to check for additional information? It looks like the throttling allowed a failover?!
    DatabaseFailover-00cb58bd-0115-4478-b4aa-fe60dce2757a-DatabasePercentRPCRequestsDatabaseFailover: Throttling allowed the operation to execute
    Starting recovery action (Action=DatabaseFailover, Resource=00cb58bd-0115-4478-b4aa-fe60dce2757a, Requester=DatabasePercentRPCRequestsDatabaseFailover, ThrottlingMode=None)
    Recovery action failed. (Action=DatabaseFailover, Resource=00cb58bd-0115-4478-b4aa-fe60dce2757a, Requester=DatabasePercentRPCRequestsDatabaseFailover, Error=Microsoft.Exchange.Cluster.Replay.AmDbMoveOperationNotSupportedStandaloneException: An Active Manager
    operation failed. Error: The database action failed. Error: This operation is not supported on standalone mailbox servers.
       at Microsoft.Exchange.Cluster.ActiveManagerServer.AmDbOperation.Wait(TimeSpan timeout)
       at Microsoft.Exchange.Cluster.ActiveManagerServer.ActiveManagerCore.MoveDatabase(Guid mdbGuid, MountFlags mountFlags, UnmountFlags dismountFlags, DatabaseMountDialOverride mountDialOverride, AmServerName fromServer, AmServerName targetServer,
    Boolean tryOtherHealthyServers, AmBcsSkipFlags skipValidationChecks, AmDbActionCode actionCode, String moveComment, AmDatabaseMoveResult& databaseMoveResult)
       at Microsoft.Exchange.Cluster.ActiveManagerServer.AmRpcServer.<>c__DisplayClassc.<MoveDatabaseEx>b__b()
       at Microsoft.Exchange.Data.Storage.Cluster.HaRpcExceptionWrapperBase`2.RunRpcServerOperation(String databaseName, RpcServerOperation rpcOperation)
       --- End of stack trace on server (XMNWDEX001.Xamin.com) ---
       at Microsoft.Exchange.Data.Storage.Cluster.HaRpcExceptionWrapperBase`2.ClientRethrowIfFailed(String databaseName, String serverName, RpcErrorExceptionInfo errorInfo)
       at Microsoft.Exchange.Data.Storage.ActiveManager.AmRpcClientHelper.RunDatabaseRpcWithReferral(AmRpcOperationHint rpcOperationHint, IADDatabase database, String targetServer, InternalRpcOperation rpcOperation)
       at Microsoft.Exchange.Data.Storage.ActiveManager.AmRpcClientHelper.MoveDatabaseEx(IADDatabase database, Int32 flags, Int32 dismountFlags, Int32 mountDialOverride, String fromServer, String targetServer, Boolean tryOtherHealthyServers, Int32
    skipValidationChecks, AmDbActionCode actionCode, String moveComment, String& lastServerContacted, AmDatabaseMoveResult& moveResult)
       at Microsoft.Exchange.HA.ManagedAvailability.ManagedAvailabilityHelper.PerformDatabaseFailover(String componentName, String comment, Database database)
       at Microsoft.Exchange.Monitoring.ActiveMonitoring.Responders.DatabaseFailoverResponder.InitiateDatabaseFailover(String databaseGuidString)
       at Microsoft.Exchange.Monitoring.ActiveMonitoring.Common.ThrottledRecoveryAction.Execute(Boolean throwOnException, TimeSpan timeout, Action`1 action)
       at Microsoft.Office.Datacenter.ActiveMonitoring.ResponderWorkItem.InvokeResponder(CancellationToken cancellationToken)
       at Microsoft.Office.Datacenter.ActiveMonitoring.ResponderWorkItem.CheckCorrelationAndInvokeResponder(MonitorResult lastMonitorResult, CancellationToken cancellationToken)
       at Microsoft.Office.Datacenter.ActiveMonitoring.ResponderWorkItem.<>c__DisplayClassb.<>c__DisplayClassf.<DoManagedAvailabilityWork>b__9(ResponderResult lastResponderResult)

    Hi,
    Based on this event log, I see the errors “Error=Microsoft.Exchange.Cluster.Replay.AmDbMoveOperationNotSupportedStandaloneException” and “Error: This operation is not supported on standalone mailbox servers.”
    This suggests an unsupported configuration in your Exchange deployment: a database failover responder fired, but the automatic failover cannot run because the server is a standalone mailbox server rather than a DAG member.
    I will be glad to help further if you describe your Exchange server deployment in more detail.
    Thanks.

  • Change deployment package for software updates

    Hi there
    Currently we have different deployment packages and software update groups based on year, product, etc.
    In the near future I'd like to rearrange our software deployment process in Configuration Manager 2012:
    1x deployment package for all updates
    1x "Full" software update group with all updates in it. In addition, we'll create a "Diff" software update group on patch day and later merge those updates, via Edit Membership, into the "Full" software update group.
    Are the following steps, which I would perform, correct?
    1. Select all updates which are deployed, not expired and not superseded
    2. Create a new software update group "Full"
    3. Select the new software update group --> Download --> create my new deployment package
    4. Deploy the new software update group to my collections
    5. Delete the obsolete software update groups / deployment package
    6. Delete the old update source folders on our filer.
    I currently don't know whether the re-download process makes the updates "forget" the old deployment package. With the steps described above I should get rid of all expired and superseded updates.
    Thanks for any advice :-)
    Regards,
    Simon

    That is correct, they only receive and install the updates they require, but don't confuse the metadata and deployments with the updates themselves. They will still receive the metadata and the policies for every update you deploy to them, and that's where
    the problem lies. Each and every update assigned to a client (using a deployment) has its own policy, which of course must be downloaded by the client and stored in WMI, causing the bloat. Note that I haven't experienced this first-hand but am relying on the
    accounts of others here in the forums; still, if there is any chance of this being an issue, I would avoid it.
    For update groups, just a few categories is sufficient to break things up, and I typically use three: workstations, servers, Office. These are often on different patching schedules anyway, so it makes sense to have three separate ADRs for them.
    For packages, I typically organize based on the calendar, creating a new package every 3-6 months with the package containing all updates. There's really no need to divide the package up by product unless you have DPs dedicated to a specific product. Note
    that pre-R2, to change the package an ADR referenced you had to use PowerShell -- it's been added into the GUI in R2 though.
    Jason | http://blog.configmgrftw.com

  • How do I keep my Patch names for my external Midi instruments when starting

    Whenever I start a new project, all the names in the different banks are just numbers.
    I use a JV-2080 with expansion cards, which represent close to 2000 patch names in 11 banks.
    (I have some other MIDI instruments too, with their banks.)
    I have a "basic setup" project where all the patches for all the instruments are loaded, and until I know a better way, I always start with that "basic setup", then modify it and "save as" a new project.
    Is that the only way, or is there a way to automatically load all the patch names in the Multi Instrument window?
    I find it very annoying that I cannot start a project from scratch without having to copy and paste hundreds of patch names.
    I read somewhere in the manual that "initializing" the banks in the Multi Instrument window occupies extra memory. I find this comment very strange, because the text file containing all my patch names is only 24 kB, just a drop in the bucket.
    Or am I missing something?

    The whole point of autoload templates is that you set up a Logic song to contain all the stuff you need -- environment configuration, patch names, track assignments and so on -- so whenever you load Logic or start a new song, all your stuff is instantly there.
    Think of it like a real studio: when you start a new song, you don't unpack all your gear from boxes, put it on shelves, and wire it up before you start -- that is all done once, beforehand. You just turn everything on and go.
    Same with Logic: it loads your startup template with all your stuff in it, so you can immediately start making music, with no futzing necessary.

  • Dev - UAT - Prod env migration

    Hi Experts,
    The Apps 7.9.6 OM module is already implemented on the Sun Solaris sandbox. Could you please help me with how I can migrate the existing OM module to the UAT and Prod environments? I would appreciate it if you could tell me what effort is required to do this.
    Thanks,
    Kris.

    Hi everyone
    We finished implementing BI Apps for the HR, Finance, Supply Chain, Procurement and Service modules. I am also interested in the best practice for migrating this to UAT and subsequently to Prod environments.
    Any thoughts?
