Copying application system within a repository

We have a scenario in which a certain application system needs to be copied. The idea is to keep an extra version of the application system: after the copy, the source version will be frozen and the newly made copy will be used as the new working copy. We do not want to use version control.
When copying application system 1 (AS1) to AS2, it appears that certain references end up broken. The copy itself was made by highlighting AS1 and right-clicking it. (Should 'external references' have been checked before the actual copy action?)
After the application system has been copied, the following happens: relationships now cross application systems. For instance, a business function in AS2 can refer to another one in AS1. This is not the desired effect; we would like AS2 to behave independently of AS1.
Is this possible? Does anybody have experience with this kind of issue? Designer version 9.0.4.5.6 is used. Any feedback would be more than welcome.
Kind regards from the Netherlands.

Hi Brett,
You can export the application system in the 6i repository to a dump file using the Export utility (Utilities --> Export in the Repository Object Navigator), and then import it into the 9i repository using the Import utility (Utilities --> Import, again in the Repository Object Navigator).
If there are User Extensions in the 6i repository, you will need to extract them before exporting the application system, and load them into the 9i repository before importing the dump.
Extracting and loading user extensions can be performed with the Extract User Extensions and Load User Extensions utilities, available under User Extensions in the Repository Administration Utility.

Similar Messages

  • Copy application from within an application?

    Ok, this may be a bit of a stretch, but is it possible to create a new application as a copy of an existing one from within an application?
    To explain: I'm envisioning an APEX framework where I have a "core" application which has all of my standard functionality built in; whenever I want to build a new application, I copy that, rather than building a new one from scratch. Information about the applications (active status, authentication method, created by, last modified by, description, etc.) would be stored in a central (custom) table (this much has been covered by Scott Spendolini in this presentation).
    So, I'd like to build an application which would show me the data from this central table, and also have a wizard that would a) copy the core application to a new ID, potentially setting some features in the process, and b) create the appropriate record in the central table.
    Thanks,
    -David

    David,
    I am having the same thoughts and will watch this thread. I believe what you are referring to is "subscribing to a master application", of which I have no experience.
    Just saw another thread similar to this one:
    See {thread:id=1774693}
    Jeff
    Edited by: jwellsnh on Nov 22, 2010 1:58 PM

  • Grouping collected versions of Designer 6 application systems in 10g

    Hello,
    I wasn't sure how to make the Subject clear, hopefully I'll explain my problem anyway.
    I'm testing the migration of application systems from Designer 6 to Designer 10g. Some of the v6 application systems are versioned in the sense that we created new versions and froze the current one, i.e. in the RON, select Application > New Version in v6. So we may have several 'versions' of an application system called TEST, i.e.
    TEST (1)
    TEST (2)
    TEST (3)
    ... and so on to TEST (X). X is live, the rest are frozen.
    In the 10g RON, each version (all frozen and live) of TEST appears as a separate Container in its own right. Imagine we have this situation with other named application systems we've migrated - we're dealing with a big list in the RON.
    What I want to do is group all my TEST containers under a 'TEST' top level container to make navigating a bit easier (so I can expand the top level only if I want to see all the versions underneath). I did this and used the option 'Copy Application System' to copy TEST (1) and TEST (2) into this TEST top level. Unfortunately the audit info for TEST (1) and TEST (2) under the new TEST container has been lost - they now have today's date. As they're copies, I still have TEST (1) and TEST (2) as explicit Containers in their own right, but I can't see how to group them as I'd like without losing this audit info.
    So - is this possible? If so, what am I doing wrong?
    Hope I've explained myself. I should add that we're not switching versioning on in 10g yet, so we have the GLOBAL_SHARED_WORKAREA for the moment.
    Thanks, Antony
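    The grouping Antony describes - versioned containers collected under one top-level name - follows a simple naming pattern. As an illustration only (this is not a Designer API, and the container names are hypothetical), the pattern can be sketched like this:

```python
import re
from collections import defaultdict

def group_versions(names):
    """Group 'NAME (n)' container names under their base name."""
    groups = defaultdict(list)
    for name in names:
        m = re.match(r"^(.*?)\s*\((\d+)\)$", name)
        if m:
            groups[m.group(1)].append(int(m.group(2)))
        else:
            groups[name]  # an unversioned container gets an empty version list
    return {base: sorted(v) for base, v in groups.items()}

# Hypothetical container names as they might appear in the 10g RON.
print(group_versions(["TEST (1)", "TEST (2)", "TEST (3)", "OTHER (1)"]))
# {'TEST': [1, 2, 3], 'OTHER': [1]}
```

    The highest version number per base name would be the live one; everything below it is frozen.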

    Michael,
    Thanks again. We need to bring over older, frozen versions of the application systems, e.g. TEST (4), as we have interdependencies with other application systems (in terms of shared objects). Otherwise we would have brought over only the latest versions, to simplify things.
    When you say at the end of your response to 'start the SCM's method of versioning', you're talking about switching versioning on in 10g, right? We were going to migrate our v6 content and use the GLOBAL_SHARED_WORKAREA for the moment; enabling versioning, and all that entails, would be the next phase.
    Antony

  • How to synchronize two application systems

    Hi,
    I work with Designer 9i, in a non-versioned repository. I created a copy of an application system in the same repository.
    I would like to know whether there is some mechanism that will synchronize the two application system copies. What I want is to automate the replication, into the original application system, of any modification I make to the copy.
    Thanks for your help,
    Malika

    Hi Roel,
    I want to do this because in our organization the repository is non-versioned, and we want to make copies of the application system to ensure that only the data modeler and the DBA can access it; the developers can't update it. We will replicate the changes into the original application system, and the developers then carry on with their treatment modules.
    Thanks for your reply ,
    Malika
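    What Malika asks for amounts to change replay: record each modification made in one copy and apply the same log to the other. Nothing in this thread shows Designer offering such a mechanism out of the box, so the sketch below is purely conceptual - the dict-of-objects model and the function names are invented for illustration:

```python
# Toy model: an application system as a dict of object name -> definition.
# This is NOT a Designer API, just a sketch of the change-replay idea.

def record_change(log, obj, definition):
    """Record a modification so it can be replayed on another copy later."""
    log.append((obj, definition))

def replay(log, app_system):
    """Apply the recorded changes, in order, to a copy of the application system."""
    for obj, definition in log:
        app_system[obj] = definition
    return app_system

dba_copy = {"EMP": "v1"}
original = {"EMP": "v1"}

log = []
record_change(log, "EMP", "v2")  # change made in the DBA-controlled copy...
dba_copy["EMP"] = "v2"
replay(log, original)            # ...replayed into the original for the developers
print(original)
# {'EMP': 'v2'}
```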

  • Application System

    After I created a repository in Oracle Designer R6.0 (Front Panel 6.0.3.1.0) on Windows NT Version 4.0, creating an application system fails with the error message ORA-01403: no data found. If I go through the Repository Object Navigator, the same error message appears, but when I click the Save button a second time the error code is
    ORA-06502: PL/SQL numeric or value error.

    Shan
    If you are using an Oracle8 server, then the large_pool_size of the SGA is probably too small. I had the same problem: I raised the size of the large pool within the SGA to about 15 MB and then the problem was gone.

  • How do I install OSX onto a new SSD (in the place of my optical drive) without transferring all data across, keeping the applications, system and library on the SSD to improve speed but non-essential items (the home folder) on the HDD?

    I have a mid 2009 13 inch unibody 2.53GHz MacBook Pro.  I'm finding that it doesn't run as quickly as it used to. 
    A genius in the Apple Store suggested that I replace my optical drive with an SSD, but use the SSD only for OSX, applications, system and library, and keep all documents, pictures, music etc. on the current hard drive.
    I would be grateful if someone could help me with:
    1) installing OSX on the SSD without copying across data from the current hard drive
    2) transferring applications, system and library folders across to the SSD so that they still function
    3) changing my settings so that OSX reads the home folder from the current hard drive, as well as all the applications' data (documents, music etc...)
    However, I would like to run iMovie, with all events etc solely from the SSD to speed up the process of editing movies.
    If anyone could help with this, it would be much appreciated.

    If you got the data transfer cable with your SSD, the procedure should be pretty simple - and there should be step-by-step instructions in the box. You're simply going to remove the bottom case of your computer (using a Phillips #00 screwdriver), take out the two screws in the bracket holding the hard drive in place (using the same screwdriver), remove the drive, and (using a Torx T6 screwdriver) remove the four mounting screws from the drive itself. Then put in the SSD and reassemble the machine.
    Then you'll plug in the old hard drive using the SATA-to-USB cable and use the Option key to boot from the old drive. I don't know what data transfer software Crucial provides, but I would recommend formatting the SSD using Disk Utility from your old drive ("Mac OS Extended (Journaled)" with a single GUID partition) and then using Carbon Copy Cloner to clone your old drive to your new SSD (see this user tip for cloning - https://discussions.apple.com/docs/DOC-4122). You needn't worry about getting an enclosure, since you have the data transfer cable and you don't want to use your old hard drive.
    There are a number of videos on YouTube that take you step-by-step through this procedure - many specific to Crucial SSDs and their data transfer kit - do a little searching there if you're unsure of how to proceed.
    Clinton

  • Error copying application.xml icons: .../bin-release/assets/icons' does not exist

    Hi,
    Whenever I try to export the release build, I get an error from the compiler during the process: "Error copying application.xml icons: Resource '/Project_Name/bin-release/assets/icons' does not exist." I have specified 4 icons in the -app.xml file, of sizes 16, 32, 48 and 128, which exist in the path specified, and the files are not corrupt. I have checked and unchecked the compiler directive "Copy non-embedded resources in the Output file", but that has not helped either.
    Can somebody please advise what do I do in this situation?
    We are using
    FB4 with Flex Hero(4.5) SDK and java heap space specified in .ini files is 1224m
    This is somewhat urgent guys...
    Thanks
    Shubhra

    Frank,
    Thank you for your answer. I had not checked (will do so tonight).
    I had considered it adequate to wipe out the entire system directory (thereby wiping out the integrated WebLogic server), but perhaps it was not? I did not specify that this is on the integrated server, but that is the case...
    Stuart

  • How to use BSP application SYSTEM for session handling

    Hi All,
    We are implementing OCI. We have a few BSP applications that are called by a standard ITS application. I need to destroy the session on the server side when the browser is closed.
    I copied the pages session_default_frame.htm and session_single_frame.htm from bsp application SYSTEM into my application and made necessary changes.
    I need to pass one url 'HOOK_URL' (this is related to OCI) from starting page of application to final page.
    Now suppose there were originally two pages in my application, page1.htm and page2.htm; I was able to pass the HOOK_URL from page1 to page2. But after adding the two pages from the SYSTEM application, I can only pass the HOOK_URL from session_single_frame.htm to page1.htm.
    Page session_single_frame.htm:
    Page attributes:
    hook_url     TYPE     STRING (AUTO)
    OnRequest:
    navigation->set_parameter( hook_url ).
    but I can't pass it from page1 to page2. What additional code is required?
    page page1.htm:
    Page attributes:
    hook_url     TYPE     STRING (AUTO)
    onRequest:
    navigation->set_parameter( 'HOOK_URL' ).
    The above code was working fine until I added the two new pages to my application.
    Hope I was able to explain the issue properly.
    Thanks,
    Anubhav.

    Hi,
    Let me describe the steps I have taken once again:
    1) Copied the pages session_single_frame and session_default_frame from the SYSTEM application and changed the name in
    DATA: target_page               TYPE STRING VALUE 'session_test.htm'.
    to
    DATA: target_page               TYPE STRING VALUE 'mypage1.htm'
    2) Added a page attribute HOOK_URL of type STRING (AUTO) to session_single_frame.htm.
    3)Added the line
    <input type="hidden" name="HOOK_URL" value="<%= hook_url %>">
    to page1.htm so that hook_url is passed to page2.htm (page2.htm has a page attribute HOOK_URL of type STRING, AUTO).
    The hook_url in page2.htm looks like:
    "http://sapupd.mycompanyname.com:8002/sap(cz1TSUQlM2FBTk9OJTNhc2FwdXBkX1NSTV8wMiUzYXJUaHBOdE1VZDdhWkVTa3hYZGtPTXRxY1NBTWo3VlAwN3NWQ2c2REYtQVRU)/bc/gui/sap/its/bbpsc02/?~OkCode=ADDI&~Target=_top&~Caller=CTLG&~sap-syscmd=NOCOOKIE&~client=200&~language=EN&~HTTP_CONTENT_CHARSET=utf-8";
    The problem is that after the page is submitted, a blank page comes up.
    On closing this blank page, the "Ending user session" window appears.
    Please help
    Thanks,
    Anubhav.
    Edited by: Anubhav Jain on Oct 21, 2008 6:49 AM

  • SPROXY not working in application system

    Hello all,
    I have been trying to make SPROXY work in my QAS ERP 2004 SPS23 system for a few days now, but I cannot manage it. I have configured SPROXY in the past for my sandbox and DEV systems with success, but I do not seem to have any luck in QAS.
    My PI landscape consists of sandbox, development, QA and production systems. The version is PI 7.0 SPS21.
    My ERP landscape consists of the same 4 systems but in version 6.40 -
    ERP 2004 SPS23
    All non-production systems use a central SLD that runs on the PI development box (PID).
    The error message I get is as follows:
    When I run SPROXY in ERP QAS client 330, I get : "No connection to Integration
    Builder (only local data visible)"
    When I do Goto --> Integration Builder, I get the message "Integration
    Builder address not maintained"
    However, when I do Goto --> Connection Test, from the 4 reports I have
    to run, two work and two do not work:
    =>Check/maintain with report SPROX_CHECK_IFR_ADDRESS works and shows the correct address for Integration repository
    =>Check with report SPROX_CHECK_HTTP_COMMUNICATION works
    The other two reports:
    =>Check with report SPROX_CHECK_IFR_RESPONSE
    =>Check with report SPROX_CHECK_IFR_CONNECTION
    return: Integration Builder data not understood.
    SLDCHECK runs fine: All green and invokes correct IE session with the correct SLD URL (central SLD)
    SLDAPICUST shows the correct SLD (central)
    SXMB_ADM --> Integration Engine Configuration shows the destination RFC
    SAPIS_PIQ
    which is defined in SM59 as type H and works fine
    TCP RFCs LCRSAPRFC and SAPSLDAPI work
    The corresponding JCos in Visual Administration look ok
    I have registered the queues in SXMB_ADM and they registered ok .
    The /sap/xi/engine service is active in both the application system and the PI  system (QAs and PIQ)
    Finally the exchange profile settings in PIQ are fine as well
    com.sap.aii.connect.repository.name : piqas.finance.local
    com.sap.aii.connect.repository.httpport : 50000
    com.sap.aii.connect.repository.contextRoot :rep
    com.sap.aii.connect.integrationbuilder.startpage.url : rep/start/index.jsp
    com.sap.aii.applicationsystem.serviceuser.name: PIAPPLUSER
    and the password is set.
    PIAPPLUSER is service user and is not locked.
    I have tested SLDCHECK in PIQ and SLDAPICUST and they look ok as well.
    Any other clues? Any way I can investigate further via checking logs
    and traces to find out what is wrong?
    The "Data not understood" message is not helping me much in determining
    what is wrong
    Many thanks
    Andreas

    Hi Andreas,
    Please go through the blog below, which explains proxy connectivity step by step.
    /people/vijaya.kumari2/blog/2006/01/26/how-do-you-activate-abap-proxies
    Regards,
    Naveen.

  • RME-00011 error encountered while creating a new application system

    Hello,
    I have just started with Oracle Designer and have installed it.
    I wish to run the Repository Reports and the ER Diagrammer against an existing database, so I created a repository. When I try to create an application system, it gives me the error:
    RME-00011: "Operation 'close' on ACTIVITY has failed"
    And as I cannot create one, I cannot do any work with it.
    What is going wrong here? Your help is greatly appreciated.
    -ksg

    Hi
    first you must grant execute on sys.dbms_lock and sys.dbms_pipe to the repository owner. Also, MAX_ENABLED_ROLES must be a multiple of 10 and should exceed 20 in your
    INITxxx.ORA file.
    But I must say that I have this problem together with RME-02124, RME-02105 and ORA-01403, and the solution mentioned above has not solved my problem... :(
    But it may help you...
    take it easy..

  • Error when trying to create an Application System

    Hi everybody,
    I've installed Personal Oracle 8i and Designer 6.0 on my computer. For the installation I used valuable information from someone called Mark (he's on this list - thank you very much), but I still have a problem: when I try to create a new Application System, I get the following error messages:
    RME-00011: Operation 'open' on ACTIVITY has failed
    RME-00011: Operation 'INS' on ci_application_systems has failed
    Does anybody know what is causing these errors? I searched through the help, but the proposed solutions are very generic; they just say "Try to find more specific errors".
    Can someone help me, please?
    Thanks in advance,
    Renato A. Veneroso

    Go here - I found a fix that works for my problem:
    http://www.prenhall.com/mcfadden/oracle/repository/problems.html

  • Windows Server 2012 - Hyper-V - Cluster Shared Storage - VHDX unexpectedly gets copied to System Volume Information by "System", virtual machines stop responding

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM. This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous version of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both of the hosts' network cards that are dedicated to storage, and the storage itself, are connected to this switch; nothing else is connected to it.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually keep the setup updated.
    Normal operation:
    Normally this setup works just fine, and we see no real difference in startup, file copy, or LoB application processing speed compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during a file repair.
    Our Problem:
    Our problem is that for some reason a random VHDX gets copied by "System" to the System Volume Information folder of the Cluster Shared Storage (i.e. C:\ClusterStorage\Volume1\System Volume Information).
    All VMs stop responding, or respond very slowly, during this copy process; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    This happens at random, not every day, and different VHDX files from different VMs get copied each time. Sometimes it happens during the daytime, which causes a lot of problems, especially when a 200 GB file gets copied (which takes a long time).
    What it is not:
    We thought this was connected to the backup, but the backup had finished 3 hours before the last occurrence, and the backup never uses any of the files in System Volume Information, so it is not the backup.
    An observation:
    When this happened today, I switched on ShadowCopy (previous versions of files) and set it to use only 320 MB of storage, and the copy process stopped and the virtual machines started responding again. This could be unrelated, since there is no way to see how much of the VHDX is left to be copied; it might have finished at the same moment I enabled ShadowCopy (previous versions of files).
    Our question:
    Why is a VHDX copied to System Volume Information when the scheduled ShadowCopy (previous versions of files) is switched off? As far as I know, nothing should be copied to this folder when this function is switched off.
    List of VSS Writers:
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Writer name: 'Task Scheduler Writer'
       Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124}
       Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b}
       State: [1] Stable
       Last error: No error
    Writer name: 'VSS Metadata Store Writer'
       Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06}
       Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93}
       State: [1] Stable
       Last error: No error
    Writer name: 'Performance Counters Writer'
       Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2}
       Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381}
       State: [1] Stable
       Last error: No error
    Writer name: 'System Writer'
       Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
       Writer Instance Id: {7848396d-00b1-47cd-8ba9-769b7ce402d2}
       State: [1] Stable
       Last error: No error
    Writer name: 'Microsoft Hyper-V VSS Writer'
       Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Instance Id: {8b6c534a-18dd-4fff-b14e-1d4aebd1db74}
       State: [5] Waiting for completion
       Last error: No error
    Writer name: 'Cluster Shared Volume VSS Writer'
       Writer Id: {1072ae1c-e5a7-4ea1-9e4a-6f7964656570}
       Writer Instance Id: {d46c6a69-8b4a-4307-afcf-ca3611c7f680}
       State: [1] Stable
       Last error: No error
    Writer name: 'ASR Writer'
       Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
       Writer Instance Id: {fc530484-71db-48c3-af5f-ef398070373e}
       State: [1] Stable
       Last error: No error
    Writer name: 'WMI Writer'
       Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0}
       Writer Instance Id: {3792e26e-c0d0-4901-b799-2e8d9ffe2085}
       State: [1] Stable
       Last error: No error
    Writer name: 'Registry Writer'
       Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
       Writer Instance Id: {6ea65f92-e3fd-4a23-9e5f-b23de43bc756}
       State: [1] Stable
       Last error: No error
    Writer name: 'BITS Writer'
       Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
       Writer Instance Id: {71dc7876-2089-472c-8fed-4b8862037528}
       State: [1] Stable
       Last error: No error
    Writer name: 'Shadow Copy Optimization Writer'
       Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
       Writer Instance Id: {cb0c7fd8-1f5c-41bb-b2cc-82fabbdc466e}
       State: [1] Stable
       Last error: No error
    Writer name: 'Cluster Database'
       Writer Id: {41e12264-35d8-479b-8e5c-9b23d1dad37e}
       Writer Instance Id: {23320f7e-f165-409d-8456-5d7d8fbaefed}
       State: [1] Stable
       Last error: No error
    Writer name: 'COM+ REGDB Writer'
       Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
       Writer Instance Id: {f23d0208-e569-48b0-ad30-1addb1a044af}
       State: [1] Stable
       Last error: No error
    Please note:
    Please only answer our question and do not offer general optimization tips that do not directly address the issue! We want the problem to go away, not to finish a bit faster!

    Hello Lawrence!
    Thank you for your reply; some comments to help you and others who read this thread:
    First of all, we use Windows Server 2012 and the VHDX format, as I wrote in the headline and in the text of my post. We have not had this problem in similar setups with Windows Server 2008 R2, so the problem seems to have been introduced in Windows Server 2012.
    The posts you refer to seem to be outdated and/or do not apply to our configuration:
    The post about dynamic disks:
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx is only a recommendation for Windows Server 2008 R2 and the VHD format. Dynamic VHDX is indeed recommended by Microsoft when using Windows Server 2012 (see the optimization guide for Windows Server 2012).
    In fact, if we used fixed VHDX we would have a bigger problem, since fixed VHDX files are generally larger than dynamic disks: more data would be copied, which would take longer, and the VMs would be unresponsive for a longer time.
    The post "What's the deal with the System Volume Information folder"
    http://blogs.msdn.com/b/oldnewthing/archive/2003/11/20/55764.aspx is for Windows XP / Windows Server 2003, and some things have changed since then. For instance, in Windows Server 2012 Shadow Copies can no longer be controlled by going to Control Panel -> System; instead, you right-click a drive (i.e. a volume, for instance the C: volume) in Computer and then click "Configure Shadow Copies".
    The Windows Server 2008 R2 backup problem:
    http://social.technet.microsoft.com/Forums/en/windowsbackup/thread/0fc53adb-477d-425b-8c99-ad006e132336 - This post is about antivirus software trying to scan files in the System Volume Information folder that are used during backup, and we do not have any antivirus software installed on our hosts, as I stated in my post.
    Comment that might help us:
    So according to the "System Volume Information" definition, the operation you mentioned is a Volume Shadow Copy. Check Event Viewer for Volume Shadow Copy related event logs and post them.
    Why?
    Further investigation suggests that a volume shadow copy is somehow created even though the schedule for Shadow Copies is turned off for all drives. This happens at random and we have not found any pattern. Yesterday this operation took almost all available disk space (over 200 GB), but all of the disk space was released when I turned on scheduled Shadow Copies for the CSV.
    I therefore draw these conclusions:
    The CSV volume has about 600 GB of disk space, and the Volume Shadow Copy used 200 GB, or about 33% of the disk space, while the default limit is 10%; I therefore conclude that for some reason the unscheduled Volume Shadow Copy had no limit (or ignored the limit).
    When I turned on the schedule I also changed the limit to the minimum amount, which is 320 MB, and this is probably what released the disk space. That is, the unscheduled Volume Shadow Copy operation was aborted; it adhered to the new limit and deleted the shadow copy it had taken.
    I have also set the limit for Volume Shadow Copies on all other volumes to 320 MB, using the "Configure Shadow Copies" window that you open by right-clicking a drive (volume) in Computer and selecting "Configure Shadow Copies...".
    It is important to note that setting a limit for shadow copy storage and disabling the schedule are two different things! It is possible to have unlimited storage for shadow copies while the schedule is disabled; however, I do not know whether this was the case before I enabled Shadow Copies on the CSV, since I did not look for this.
    I have now defined a 320 MB limit for shadow copy storage on all drives, so no VHDX should be copied to System Volume Information, since they are all larger than 320 MB.
    Does this sound about right or am I drawing the wrong conclusions?
    Limits for Shadow Copies:
    Below we list the limits for our two hosts:
    "Primary Host":
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (91%)
    Shadow Copy Storage association
       For volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Shadow Copy Storage volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    Shadow Copy Storage association
       For volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Shadow Copy Storage volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (3%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    C:\>cd \ClusterStorage\Volume1
    Secondary host:
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 35,0 MB (10%)
    Shadow Copy Storage association
       For volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Shadow Copy Storage volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 27,3 GB (10%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 6,80 GB (10%)
    C:\>
    There is something strange about the limits on the Secondary host!
I have not changed any settings on the Secondary host, yet, as you can see, it reports a maximum limit of only 35 MB of Shadow Copy Storage on the CSV while also claiming that this is 10% of the volume. That is clearly not the case, since 10% of 600 GB is 60 GB!
The question is: why does it by default set such a small limit (i.e. < 320 MB) on the CSV, and is this the cause of the problem? That is, is the limit ignored because it is smaller than the smallest value you can set through the GUI?
Is the default 35 MB maximum Shadow Copy limit a bug, or is there a logical reason for setting a limit that, according to the GUI, is too small?
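If the undersized default limit is the suspect, it can be raised from an elevated prompt with `vssadmin resize shadowstorage`. A minimal sketch, assuming the CSV volume GUID from the listing above (substitute your own; `/MaxSize` also accepts absolute sizes such as 60GB, or UNBOUNDED):

```shell
:: Sketch -- run in an elevated command prompt on the Secondary host.
:: The GUID is the CSV volume's GUID as reported by "vssadmin list shadowstorage".
vssadmin resize shadowstorage /For=\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\ /On=\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\ /MaxSize=10%
```

Rerun `vssadmin list shadowstorage` afterwards to confirm the new maximum took effect.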

  • Cannot log in to the command line tool or check out an application system

    Hi,
    I'm not able to check out an application system; the error message stack is empty. If I try to connect using the command line tool to purge the version, I get this:
    Repository Command Line Tool: Production on 11-July-03, 18:46
    Release 9.0.2.90.21 - (c) Copyright 2002 Oracle Corporation. All Rights Reserved
    REPCMD> conn repos_manager@msdes01
    Enter password: ******
    [JDK2] No message error
    oracle.repos.services.ReposServiceException
        at oracle.repos.services.connection.RepositoryConnection.setConnection(RepositoryConnection.java:613)
        at oracle.repos.services.connection.RepositoryConnection.setConnection(RepositoryConnection.java:539)
        at oracle.repos.tools.cmdline.util.CmdLineConnection.setConnection(CmdLineConnection.java:162)
        at oracle.repos.services.connection.RepositoryConnection.<init>(RepositoryConnection.java:333)
        at oracle.repos.tools.cmdline.util.CmdLineConnection.<init>(CmdLineConnection.java:122)
        at oracle.repos.tools.cmdline.ConnectCommand.runConnect(ConnectCommand.java:197)
        at oracle.repos.tools.cmdline.ConnectCommand.exec(ConnectCommand.java:78)
        at oracle.repos.tools.cmdline.BaseCommand.exec2(BaseCommand.java:130)
        at oracle.repos.tools.cmdline.CommandLine.run(CommandLine.java:494)
    REPCMD>
    Any help will be greatly appreciated.
    Thanks,
    Radek

    Radek,
    See Metalink article <Note:178311.1>
    I assume sqlplus and the Repository Object Navigator are working.
    Check the tnsnames.ora file, i.e. change the entry
    DTE817.world = (DESCRIPTION = (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ukp-desteam)(PORT = 1521)))
    (CONNECT_DATA = (SID = DTE817)))
    to
    DTE817.world = (DESCRIPTION = (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ukp-desteam)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = DTE817)))
    David
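A quick way to check that the edited alias parses and the listener answers (a sketch, assuming the Oracle client's bin directory is on the PATH; note that tnsping does not validate the CONNECT_DATA section, so follow up with a sqlplus logon):

```shell
:: Verify the alias resolves and the listener responds before retrying REPCMD.
tnsping DTE817.world
```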

  • Create Application System dialog box popping up whenever Designer is invoked, then...

    Hi,
    we recently installed Designer on our systems, but whenever we invoke Designer the Create Application System dialog box pops up. When I then give an application name, the following error comes up:
    RME-00011 Operation INS on CI_APPLICATION_SYSTEMS has failed
    RME-02124 Failed to execute Sql Statement
    begin
    errcnt := rmmes.getsize;
    rmmes.getall (:errutil,:errcode,:err1,:err2,:err3,:err4,:err5,:err6,:err7);
    end;
    RME-02105 Oracle error occurred
    ORA-06512 Line 157 in repos_user.rmmes
    ora-06512 at line 3
    ora-01403 No Data Found
    Please check these errors and, if possible, mail your replies to the following email: [email protected]
    ASAP.
    Thanks in advance,
    Chan

    Hi Timo.
    In my application all the navigation rules are defined in faces-config.xml.
    But I found following statement in the "fusion developers guide"
    '**You cannot specify dialog:syntax in navigation rules within the faces-config.xml file if your Fusion web application uses ADF Controller features such as task flows. However, you can use the dialog:syntax in the control flow rules that you specify in the adfc-config.xml file.**'
    Since my application defines its navigation rules in faces-config.xml and also uses task flows, I suspect this is the root of my problem.
    Do you have any solution for it?
    Thanks
    Vikas

  • Creation Application System

    I installed Designer and created the repository without any problems, but when I tried to run Designer I did not see any application system, and when I tried to create a new one I got the error "could not find PL/SQL module". What should I do?

    Hi all again
    Sorry ... I found my way out; I wish the installation instructions had been clearer!
    Thanks anyway,
    Anita
