Content Server 6.40 crashes on deletion

Hello,
Our Content Server is version 6.40, installed on 64-bit SPARC Solaris.
It works fine except that it crashes each time we delete a document.
SAP Support asked me to reinstall Apache 2.0.59.
The problem still exists.
They then told me that Apache 2.0.x is not supported on 64-bit Solaris 10.
I then installed Apache 1.3.42 with the SAP CS Apache module for Apache 1.3.x.
But it still crashes.
While I keep working with SAP Support on this, I would be very interested in your feedback on the following points:
1. Have you encountered a similar problem with the SAP CS?
2. Are you deleting documents from your Content Server?
If yes, which OS are you using?
Thanks in advance for your answers.

Hello,
Here is a pstack of the core file, for anyone who would like more details.
pstack core
core 'core' of 5801:    apache1.3.43/bin/httpd
ffffffff7ee00be0 memcpy (10029d090, ffffffff7fffc478, ffffffff7fffc388, 1, 15, 8d) + 3e0
ffffffff7e522344 __1cMFSRepositoryNdeleteExtArea6MnGString__v_ (10029d090, ffffffff7fffc478, ffffffff7fffc668, 10029e430, 0, 1) + 74
ffffffff7e5221ac __1cMFSRepositoryPdeleteDirectory6MrnGString__v_ (10029d090, 10029d388, 0, 0, 1, 0) + 22c
ffffffff7e51e6c0 __1cMFSRepositoryJdeleteDoc6MrnGString__v_ (10029d090, ffffffff7fffccc0, ffffffff7fffc8ac, ffffffff7fffcd08, 1, 1) + 168
ffffffff7e4f71dc __1cKCSDocumentPendDel_multiple6M_v_ (ffffffff7fffcb28, ffffffff7fffe900, ffffffff7fffd160, 10028b800, 0, 18) + 94
ffffffff7e4fe13c __1cOCSHttpDocumentDdel6M_v_ (ffffffff7fffcb28, ffffffff7fffe900, ffffffff7fffd640, 1002820c0, 1, 1) + 7c
ffffffff7e508810 __1cHCSQdDueryJdeleteDoc6M_v_ (ffffffff7fffdbb8, ffffffff7fffd688, ffffffff7fffd640, 100281ce0, ffffffff7fffe900, 0) + 80
ffffffff7e50cb28 __1cSKPROQdDueryValidator4nHCSQdDuery_nHCSStats__SvalidateAndProcess6M_b_ (ffffffff7fffde30, 40, ffffffff7e90a330, 1, 0, 1) + c90
ffffffff7e504600 __1cHCSQdDueryHprocess6M_b_ (ffffffff7fffdbb8, ffffffff7fffde30, ffffffff7fffdd38, ffffffff7fffe8f8, ffffffff7fffeb90, 1001f1310) + 248
ffffffff7e4dda14 __1cOcs_req_handler6FpnLrequest_rec__i_ (1002315d0, ffffffff7e7cb32a, ffffffffffffffff, 6c65006d, 64756c65, ffffffff7fffeb90) + 6f4
000000010000f960 ap_invoke_handler (1002315d0, 1f4, 0, c, 1001d6800, 100191008) + 128
000000010002e118 process_request_internal (1002315d0, 1, 40, 52, ffffffff79500617, 0) + 5f0
000000010002e1b8 ap_process_request (1002315d0, 4, 1002315d0, 0, 1002315d0, 1002315d0) + 30
00000001000220e4 child_main (6, 10001f610, 0, ffffffff7f1461e0, 0, ffffffff7f300200) + b7c
00000001000224d0 make_child (10018b340, 6, 4c03bd47, 0, ffffffff7ffffac0, 1000752e0) + 1a0
0000000100022bfc perform_idle_server_maintenance (ffffffffffffffff, 1548, d, 10018b340, 1000752e0, 1001819d8) + 55c
00000001000236d4 standalone_main (1, ffffffff7ffffd98, 10017c140, 0, ffffffff7f1425a4, 0) + 6bc
00000001000242fc main (1, ffffffff7ffffd98, ffffffff7ffffda8, ffffffff7ef4b820, ffffffff7f100100, ffffffff7f300200) + 76c
000000010000899c _start (0, 0, 0, 0, 0, 0) + 17c
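
For anyone who wants the frames in readable form: the crash happens inside memcpy, called from FSRepository::deleteExtArea via deleteDirectory and deleteDoc. The Sun Studio compiler ships a c++filt that understands this mangling scheme; a rough sketch (the /opt/SUNWspro path below is a typical Sun Studio location and may differ on your system):

# Demangle the whole stack in one go:
pstack core | /opt/SUNWspro/bin/c++filt
# Or decode a single frame; this one should come out roughly as
# "void FSRepository::deleteExtArea(String)":
echo '__1cMFSRepositoryNdeleteExtArea6MnGString__v_' | /opt/SUNWspro/bin/c++filt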

Similar Messages

  • Deleted attachments are not removed from Content Server

    We have set up Content Server to store business documents and to create attachments on documents in CO. Both types are stored OK and can be opened without any problems. After deleting them, though, they disappear from the Attachment List, but when looking in CS, attachments created on the document still exist in CS. Stored business documents, on the other hand, are removed properly.
    Looking at statistics in CSADMIN shows that the deletion of the created attachment does not increment the "delete" counter.
    Any ideas on why the created attachments are not removed from CS upon deletion?
    Thanks

    We have solved this issue now. By design, the attachments are not removed directly from the content server upon deletion; instead you need to run report RSBCS_REORG to completely remove them from the Content Server.
    Edited by: Christian Nordvaller on Jan 26, 2010 3:53 PM

  • Recover deleted document from Oracle content server

    Hi All,
    I've mistakenly deleted some documents from Oracle content server. I am using Oracle UCM 11g.
    I found that there is a feature called "Trash Bin" that can be used to recover deleted documents/folders, but unfortunately the "Trash Bin" setting is disabled in my UCM folder configuration.
    Is there any other way to recover?
    Please kindly help me on this; it's an urgent production issue.
    Thanks for your great support in advance.

    Is there any other way to recover?
    Try to take a look at Repository Manager admin application: http://docs.oracle.com/cd/E21764_01/doc.1111/e10978/c03_repository.htm#DAFCGDIE
    If you can still see your items there, you may be able to recover them.
    If not, I'm afraid your documents are gone from UCM. In that case, you might recover them from a back-up. There is also a chance that documents are still present in the Vault directory, so rather than 'recover', you might 're-submit' them.
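    If you want to check the vault on disk directly, a rough sketch (the install path is only an example; vault files are typically stored as <dID>.<extension> under per-content-type folders, so you need the revision's dID rather than its title):
    # Example only: search the vault for a known dID (12345 is a placeholder)
    find /u01/oracle/ucm/server/vault -type f -name '12345*'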

  • Original deletion from content server once we delete DIR

    Hi,
    I'm deleting a DIR using transaction CDESK. Will this method delete the original that is checked in to the content server?
    Regards,
    Yogesh

    Hi Yogesh,
    Yes, successful execution of the CDESK deletion ensures that the DIR and its metadata are deleted from the database, and all the associated original files are deleted from the database/Content Server, depending on the storage location chosen.
    Regards,
    Pradeepkumar Haragoldavar

  • Archiving to SAP Content Server/MaxDB: TCP/IP errors when deleting in parallel

    Dear experts.
    We are struggling to get the optimal setup for archiving to SAP Content Server using MaxDB. After running one or more archiving jobs for a few hours we suddenly lose the connection to the Content Server, and we need to restart the content server before continuing. For a long time we thought the log space was running full, but that has been ruled out. Now it seems the error occurs when 4 or more deletion jobs are running in parallel (Write -> Store -> Delete).
    The run times of a typical archiving job for about 3 GB of data would be W: 3000 s, S: 40 s, D: 12000 s. In general the delete job takes about three times as long as the write job.
    Everything seems fine until we suddenly get the TCP/IP error after some heavy archiving jobs.
    Has anyone experienced anything like this, and are there any good ideas on how to avoid it?
    Thank you very much for your inputs on this.

    We were running 3 parallel delete jobs yesterday, and it all went fine until I started the 4th job; then all jobs stopped immediately. The KNLDIAG file at the exact time of the problem looks like this:
    2008-10-08 22:57:48      0x918     19617 DEVIO    Single I/O attach, 'D:\sapdb\SDB\sapdata\DISKD0025', UKT:8
    2008-10-08 22:57:48      0x918     19617 DEVIO    Single I/O attach, 'D:\sapdb\SDB\sapdata\DISKD0003', UKT:8
    2008-10-08 23:31:14      0x918     19637 CONNECT  'vreceive', COMMAND TIMEOUT, T74
    2008-10-08 23:31:14      0x914     19637 CONNECT  'vreceive', COMMAND TIMEOUT, T72
    2008-10-08 23:31:14      0x918     19651 CONNECT  Connection released, T74
    2008-10-08 23:31:14      0x914     19651 CONNECT  Connection released, T72
    This morning I ran another test with 2 jobs, which also failed; the two job logs contain:
    09.10.2008 07:06:41 Archive file 000851-001EC_PCA_ITM is being verified                           
    09.10.2008 07:16:54 Archive file 000851-001EC_PCA_ITM is being processed                          
    09.10.2008 07:16:55 Starting deleting data                                                   
    09.10.2008 07:36:28 Connection to http://192.1.4.5:1090/ContentServer/ContentServer.: TCP/IP error
    and
    09.10.2008 07:13:30 Archive file 000851-010EC_PCA_ITM is being verified                                 
    09.10.2008 07:20:55 Archive file 000851-010EC_PCA_ITM is being processed                                
    09.10.2008 07:20:56 Sletting av data begynner . [Norwegian: "Deletion of data begins"]
    09.10.2008 07:39:28 Connection to http://192.1.4.5:1090/ContentServer/ContentServer.: Time limit exceeded
    and the KNLDIAG file looked like this:
    2008-10-09 07:11:34      0x914     19617 DEVIO    Single I/O attach, 'D:\sapdb\SDB\sapdata\DISKD0016', UKT:7
    2008-10-09 07:11:34      0x914     19617 DEVIO    Single I/O attach, 'D:\sapdb\SDB\sapdata\DISKD0023', UKT:7
    2008-10-09 07:11:35      0x90C     53040 SAVPOINT (3) Stop Conv I/O Pages 2119 IO 265
    2008-10-09 07:11:35      0x90C     53071 SAVPOINT B20SVP_COMPLETED: 689
    2008-10-09 07:12:26      0x918     19633 CONNECT  Connect req. (T74, Node:'', PID:4272)
    2008-10-09 07:12:26      0x918     19617 DEVIO    Single I/O attach, 'D:\sapdb\SDB\sapdata\DISKD0001', UKT:8
    2008-10-09 07:12:26      0x918     19617 DEVIO    Single I/O attach, 'D:\sapdb\SDB\sapdata\DISKD0015', UKT:8
    2008-10-09 07:12:26      0x918     19617 DEVIO    Single I/O attach, 'D:\sapdb\SDB\sapdata\DISKD0004', UKT:8
    2008-10-09 07:12:26      0x918     19651 CONNECT  Connection released, T74
    2008-10-09 07:12:35      0x918     19633 CONNECT  Connect req. (T74, Node:'', PID:4272)
    2008-10-09 07:12:35      0x918     19651 CONNECT  Connection released, T74
    2008-10-09 07:12:46      0x918     19633 CONNECT  Connect req. (T74, Node:'', PID:4272)
    2008-10-09 07:12:47      0x918     19651 CONNECT  Connection released, T74
    2008-10-09 07:31:47      0x914     19637 CONNECT  'vreceive', COMMAND TIMEOUT, T72
    2008-10-09 07:31:47      0x914     19651 CONNECT  Connection released, T72
    2008-10-09 07:47:20      0x914     19633 CONNECT  Connect req. (T72, Node:'', PID:4272)
    2008-10-09 07:47:20      0x914     19651 CONNECT  Connection released, T72
    2008-10-09 07:49:01      0x914     19633 CONNECT  Connect req. (T72, Node:'', PID:4272)
    2008-10-09 07:49:01      0x914     19651 CONNECT  Connection released, T72
    2008-10-09 07:49:27      0x910     19637 CONNECT  'vreceive', COMMAND TIMEOUT, T70
    2008-10-09 07:49:27      0x910     19651 CONNECT  Connection released, T70
    We have not yet looked at the tips regarding the Windows errors, but will do that when we get the possibility. In the meantime: can you read anything from the output above? It looks like, when several jobs crash at the same time, one of them gets the "Time limit exceeded" and the rest get the TCP/IP error.
    Could it possibly be the MAXUSERSESSIONS parameter, or are you still leaning towards something outside the database?
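    For reference, MAXUSERSESSIONS can be inspected and changed with dbmcli; a rough sketch (the DBM operator and password are placeholders, SDB is the database name taken from the KNLDIAG paths above, and a parameter change only takes effect after a restart):
    # Show the current value:
    dbmcli -d SDB -u control,<password> param_directget MAXUSERSESSIONS
    # Raise it, then restart the database:
    dbmcli -d SDB -u control,<password> param_directput MAXUSERSESSIONS 10
    dbmcli -d SDB -u control,<password> db_restart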

  • Delete a document permanently & Content Server Partition with authorization

    Hi
    Can you help me regarding the below requirement.
    Is there any way to create folders in the content server? Our client wants to save documents of the same type in one folder (e.g. folder1), with authorization given to particular users to use that folder (folder1).
    Regards
    Harshini

    Hi Harshini,
    from the DMS point of view, I think such a folder structure is not available for the Content Server. In DMS you can maintain different content repositories and storage categories in transactions OACT and OAC0.
    These storage categories can be seen as a kind of folder, and with the help of profiles in the DMS customizing you can also control which kinds of files are put in a specific storage category, based on the workstation application.
    I hope that this information could be useful for you.
    Best regards,
    Christoph

  • Error on update of document stored in content server

    On a regular basis (but not reproducible) we find that after updating a document, it is deleted from content server (or at least it cannot be retrieved).  These problems have only been experienced since we switched to using content server as our storage repository, as opposed to R/3.
    We create and maintain documents through a bespoke transaction, which calls standard SAP functions BDS_BUSINESSDOCUMENT_CREA_TAB and cl_bds_document_set=>update_with_table.
    Whilst the errored documents are listed in the BDS via transaction OAOR (Business Document Navigator), an error is received when you try to display them (in our case an MS Word error indicating that the file/path name is invalid).
    We are satisfied that the file/path name is valid, and find that this occurs occasionally after a document has been updated. It appears that the document has been deleted.
    This bespoke transaction has been running successfully for almost two years, and these problems have only been experienced after switching to content server as a storage repository (as opposed to R3 previously).  Has anyone else experienced these problems? 
    We are running:
    R/3 Enterprise 620,
    SAP HTTP Content Server Version 6.30 Patch 13
    SAPDB version 7.3.0.54

    Hi Sonny,
    First, check the connectivity between your content server, your workstation, and the SAP server.
    Go to the command prompt of your workstation
    and issue a ping, for example:
    C:\>Ping 117.123.45.201
    You should get a reply from the server; here 117.123.45.201 is your content server's IP.
    If you get the reply, it means that your content server and workstation are connected properly. Check the connectivity between all your systems in the same way.
    Please also check the hosts files of your systems.
    If the hosts file entry is not maintained, you can check out a file from the content server but you cannot check in the original.
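    Beyond ping, you can also test the Content Server's HTTP interface itself; a rough sketch (the URL pattern assumes a default Windows installation of the SAP Content Server; adjust host, port, and path to your setup):
    # A working server answers the serverInfo request with its version
    # and status information:
    curl "http://117.123.45.201:1090/ContentServer/ContentServer.dll?serverInfo"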
    Please let me know what kind of error message you are getting.
    Where are you trying to check in the original from? The DIR screen or the CAD Desktop screen?
    Regards,
    MRK
    (reward points if useful)

  • Error while starting the content server

    Hi,
    I have installed UCM, and when I start the content server it says it was successful, but the logs show the error below.
    Unable to publish the schema. Published schema directory could not be swapped into its proper location. [ Details ]
    An error has occurred. The stack trace below shows more information.
    !csSchemaUnableToPublish!csSchemaFailedToMovePublishedFilesIntoPlace
    intradoc.common.ServiceException: !csSchemaFailedToMovePublishedFilesIntoPlace
         at intradoc.server.schema.StandardSchemaPublisher.doPublishing(StandardSchemaPublisher.java:349)
         at intradoc.server.schema.StandardSchemaPublisherThread.run(StandardSchemaPublisherThread.java:252)
    Any idea what's going on?
    I have made the required changes in the web server's conf directory; the content server is accessible from the web page, and I am also able to log in as sysadmin.
    Not sure what's going wrong.
    Thanks

    You can sometimes get file-locking issues on the file system, so UCM cannot clean up/move the schema files properly.
    It may not be this, but there should be a Metalink note about deleting the schema directories for this.
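    If you want to confirm a lock on Windows, one option is the Sysinternals Handle tool; a rough sketch (this assumes the published schema lives under the weblayout directory, which may differ on your install):
    # List processes holding open handles to paths containing "schema":
    handle.exe schema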
    Tim
    Edited by: Tim Snell on 06-May-2010 13:00

  • Creation of new entity on ADEP content server fails

    Hi. 
    I get the following error when trying to persist a new instance of an entity into the content (experience?) server:
    Create Item Failed, RepositoryException: {Code}-LCC-REP-FCT-002, {Message}-Access denied
    The new instance is defined in a Flex GUI and connected to the content Server through Data Services.  The error occurs when I try to develop the simple app shown in
    Session #9: Using Model Driven Development with Persistence in the ADEP Content Repository (http://www.youtube.com/watch?v=rNnLC8fADlc).
    I have loaded the sdk, samples and cmis packages.  RDS and Unsecured RDS are enabled on the server.  The Deployment Handler is set to content on the Flex GUI.
    I think it fails because the app does not give appropriate security credentials to the server, but I don't know how to do so (as commented by Hodmi on the YouTube page).
    Would appreciate any help to make progress as I am keen to see what ADEP can do.
    Regards
    Jamal

    Jamal-
    Check out this link: http://learn.adobe.com/wiki/display/ADEPSamples/Model-driven+Development
    In the Run the Application section, the step I've copied below will solve your issue.  I actually did not change the permissions on the "content" node (step 4), but selected my application node under content, and that also worked.
    Have fun with ADEP!
    -Matt
    PS-Thanks to Gary for pointing me in the right direction for this answer.
    To create or delete nodes when running the application, you can use the anonymous user, but you need to configure the ACL for that user according to the following steps:
    1. Open http://<server_name>:<server_port>.
    2. Enter admin/admin as user name/password, click Sign In.
    3. Click Content Explorer in the right side panel.
    4. Click the node "content" in the left tree.
    5. In the top bar, click Security > Access Control Editor.
    6. Click the check box next to ACL, then click Set selected policies.
    7. Click New ACE, then click Browse, select user, click Search Now, select anonymous, and click select.
    8. In the Privileges list, select jcr:all.
    9. Click the green arrow, and then click OK.
    Note: if you want to add authentication to the application, a login page will be needed; refer to the detailed sample code to learn how to add authentication to such applications.

  • Will Adobe Reader 7.0.9 or 7.1.0 work with Content Server 4?

    Hello Adobe,
    I have a Toshiba Satellite A75-S213 laptop, Intel Pentium 4, 1 GB RAM, Windows XP Home.
    I've purchased ebooks from SimonSays.com several times over the course of 2008; each time, my Adobe Reader 7.0.9 was able to open the "ebx.etd" file and able to access the ebooks without any problems.
    However, the last purchase I made around a week ago keeps leading to me downloading ".acsm" files, which Adobe Reader 7.0.9 (or Adobe Reader 7.1.0) doesn't recognize and can't open.
    I contacted SimonSays.com and they suggested that I download Adobe Digital Editions. I don't know if that will solve the problem or not.
    I've done some of my own research and found that between my last purchase in early September and the one I made in early October, Adobe changed their Content Server from Version 3 to Version 4. Does that explain why I can't even download the "ebx.etd" files for ebooks I've already downloaded in the past, but instead get ".acsm" files for those ebooks?
    I take it then that Adobe Reader 7 (any version) can no longer read ebooks you purchase from retailers such as SimonSays.com, and that you have to use Adobe Digital Editions instead? I even upgraded from Adobe Reader 7.0.9 to Adobe Reader 7.1.0, thinking that would do the trick, but it's still downloading ".acsm" files.
    I did authorize Adobe Reader back in January (which is why I could download and read the ebooks from my previous purchases); I know I ran antivirus and anti-spyware software and wasn't sure whether I deleted something that might have caused this problem, but I know that I've run those same programs and never had this problem before now. Do I have to reauthorize Adobe Reader somehow, and if so, how? I log into Adobe and get that "Eden.ui" zipped file, but putting that into the folder with the ".acsm" file doesn't allow Adobe Reader to open it and read it.
    Therefore, is there any way to keep using Adobe Reader 7.1.0 to read my future ebook purchases, or is the only way to read ebooks from now on via Adobe Digital Editions? And if I download Adobe Digital Editions, is it better to keep Adobe Reader 7.1.0 on my system or should I upgrade to Adobe Reader 8 or 9?
    I'd greatly appreciate any and all advice on this, as I can't open ebooks I've purchased, and I'm pretty sure there are no refunds for ebook purchases (though I'd prefer to be able to purchase and read ebooks in the future) - thank you.
    Please take care and have a great day!
    Sincerely,
    Joseph Chengery

    Hello,
    You will need to install Adobe Digital Editions which will solve this issue. .acsm files only open with Digital Editions. Also, if you import all of your previously purchased books with Digital Editions, they will be migrated automatically. Digital Editions can be downloaded for free from:
    http://www.adobe.com/products/digitaleditions/
    Best,
    Nick

  • SAPDB Migration for Content server

    Hi Experts
    We have an SAP Content Server 6.3 running on 32-bit Windows on SAP DB 7.3.
    We are planning to move this to a Unix box (AIX or Solaris, not decided yet). Is there a comprehensive guide available about the procedure we need to follow?
    1. Do we need to upgrade the source DB before the migration?
    2. As per the MaxDB admin guide, a database copy from IA32 to Unix does not work, and as per note 962019 we need to use loadercli. Any tips on loadercli usage would be welcome.
    3. Any relevant information from past experience of a similar exercise?
    4. Does SAP support this migration if we do not buy the MaxDB migration service from SAP?

    Hi there,
    > Note"962019" mentions that:
    >
    > "Note that the target system must be installed at least with 7.6.05 build 11. An upgrade to this version does not help. Stop the target system (shutdown) and delete the database instance (drop database). SAPinst provides the option 'uninstall'. Here, you are led through the uninstalling process. The repeat the software installation with 7.6.05 build 11 or higher."
    Yep, this is necessary to make it possible to handle source databases that still have the _UNICODE parameter set to FALSE.
    > I can only find MAXDB 7.6.03 installation CD on SAP SWDC, no 7.6.05 build 11 or higher, can I install 7.6.03 first, then update patch to 7.6.05 or higher?
    Actually you don't need the installation CD.
    The "patch" package contains everything you need to perform a full installation.
    MaxDB patches are not really patches but always full installations.
    Therefore you can either use the SDBSETUP or SDBINST tool (make sure to get the path settings for <indep_data>/<indep_programs>/<dependend_programs> right then!), or you replace the installation package on the 7.6.03 CD you have with the 7.6.05 installation package from the patch download page in the Service Marketplace [http://service.sap.com/swcenter-3pmain].
    Ok, hope that answers your questions.
    Make sure to test the procedure before actually trying to perform the migration.
    Very often questions and problems come up the first time the process is done - and if that happens to be your productive migration weekend, well, keep your fingers crossed that you get a MaxDB expert supporter on weekend duty...
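    Regarding the loadercli question: the tool is driven by a command file, and the invocation looks roughly like this (DB name, user, and password are placeholders; take the actual EXPORT/IMPORT statements in the command file from note 962019 and the Loader documentation):
    # Run a Loader command file against the source instance:
    loadercli -d SDB -u sapr3,<password> -b export_cs.lst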
    regards,
    Lars

  • Error connecting to Content Server

    Hello,
    we are in the process of upgrading ERP 6.0 EHP5 to EHP6, which is also an upgrade from NW702 to NW731. We made a system copy of the productive system, called it XXX, and did all the post-system-copy work that needs to be done, including creating the system PSE. We connected this XXX system to the Content Server, created new repositories, and sent the certificates to the CS. Everything was working fine. This was all still on NW702.
    After the upgrade we get errors like:
    When trying to read/save a file, this error comes up:
    X-ErrorDescription: "Security SsfVerify failed rc=5, lasterror=1, This key
    type is not supported, PSE=\\?\E:\Program Files\SAP\Content Server\Security
    \<REPOSITORYNAME>.pse,"
    When accessing CSADMIN via OAC0, the password window opens, and at the bottom the error "HTTP 401 (unauthorized) Permission denied: adminContRep&configGet" occurs.
    I tried sending the certificate again -> still an error.
    I tried deleting the .certs and .pse files in ContentServer\security as suggested in note 1800664 -> still an error.
    There was an old note saying that the CS needs to be patched because secude.dll is too old, but even the latest patch only has the secude.dll from 2009 and there is no update. (At least I couldn't find any, probably because it is sapcrypto now.)
    We are using CS Patch 17. Since everything was working before, I do not see a reason to deploy a new patch.
    The system PSE was created with algorithm RSA with SHA-1 and 2048 bits.
    Does anyone have an idea what to try next?
    Regards
    Andreas

    Hi Andreas,
    Have you registered content server port number 1090? Check in services if it is not registered.
    Please follow the steps below.
    1. In 'Administrative Tools' open 'Windows Firewall with Advanced Security'.
    2. Right-click 'Inbound Rules' and select 'New Rule...'.
    3. In the Rule Wizard select 'Port' and click 'Next'.
    4. Enter the port number you set for the Content Server (the default port is 1090) and click 'Next'.
    5. Select 'Allow the connection' and click 'Next'.
    6. Select the Profile for your network and click 'Next'.
    7. Enter the name of the rule, e.g. 'Content Server', and click 'Finish'.
    Also make sure the hosts file is maintained.
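    On Windows Server 2008 or later, the same rule can also be created from the command line instead of the wizard; a sketch using the default port from the steps above:
    netsh advfirewall firewall add rule name="Content Server" dir=in action=allow protocol=TCP localport=1090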
    Regards,
    chandu.

  • Content Server refresh

    guys,
    we are planning a 3 system landscape here for ECC - DEV, QA and PROD. And a two system landscape for Content server - DEV and PROD.
    I am planning to connect R/3 DEV and QA to the DEV content server (2 different repositories) and R/3 PROD to the PROD content server. Now, if I think about it, whenever I refresh R/3 QA with the R/3 PROD data, the document links in QA wouldn't work anymore, because the data in the DEV content server wouldn't match anymore. So does this mean I will also have to refresh the DEV content server from the PROD content server every time I refresh R/3? Is there an easier way out?
    Please advise as to what you all have seen or done in other projects.
    thanks,
    RS

    Hi,
    I had the same scenario in the past, and I did the following:
    1) Dev and QA systems connected to the same content server, but with different repositories.
    2) The repository of the QA system has the same name as the production repository.
    3) So, when we refresh QA from PRD, the same content repository comes over to the quality system; you just need to change the content repository settings in the QA system. After that, do the steps below:
    Relocating Using Export and Import
        1. Ensure that no new data is saved to the affected repositories during the relocation procedure.
        2. Use report RSCMSEX to export the repository.
        3. Create a repository of the same name on the target content server and change the Customizing of the repository so that it points to the target content server.
        4. Import the data into the target repository using report RSCMSIM.
        5. Delete the relocated repositories from the source content server.
    The program sapkprotp controls the export and import. sapkprotp is started on the application server by default. If required, you can start sapkprotp on the content server or on another server. To set this up, enter an RFC destination when starting the report. Set the path of the transport file relative to sapkprotp.
    You should use this option if sapkprotp is not available on the application server (so far, sapkprotp is not available for AS/400). You should also use it if the content server is at a remote location, to avoid the data being transferred twice over the WAN.
    Thanks
    Sunny

  • Content Server config for production system - oac0

    Hi all
    We are setting up content server for our ERP 2005 system, and I am wondering what best practice is for our scenario.
    We have one content server (CST) for the development and test systems on host A, and one content server (CSP) for the production system on host B. The ArchiveLink configuration in dev and test points to CST. When this is transported to production, it will point to the wrong content server, and I will have to reconfigure ArchiveLink in production to point to the correct content server (CSP).
    What is the best way to set up the production system to point to the content server CSP on host B? Should I define a new content repository in t-code OAC0 in dev and transport this to production? Or should I define the new repository directly in the production system?
    Please advise.
    Best Regards,
    Thomas

    Hello,
    I had the same problem a few years ago with R/3 4.7.
    I don't think there is a perfect solution, but here is what I did:
    In the DEV system, in OAC0, I created 2 content repositories, one for the test system and one for production. I transported these 2 content repositories to the production system.
    In the DEV system, in OAC3, I created temporary links from my business objects to my production content repository. I released the request order.
    Then I deleted these links, which are wrong in the DEV system, and transported the order to the R/3 production system, where these links are right.
    In the DEV system, in OAC3, I then created the links from my business objects to my TEST content repository. I released the second request order.
    So my config is now OK in both my test and my production system.
    When we refresh the DEV system by a database copy from the production system, we just have to reimport the test request order to fix the links on the DEV system. We don't have to recreate the test content repository, as it was also defined in the production system.
    I am not sure that this is clear, but it has worked for us without problems since 2002.
    I hope this helps.
    Olivier

  • Questions for Content Server

    Dear All;
    I have set up the Content Server on Windows 2003 Standard Edition using file-system storage (not the MaxDB database); the total size of the documents could reach approximately 80 GB over the next few years. I didn't set up the cache server.
    All the attachments in ECC will be stored in this external SAP Content Server.
    Question 1:
    Can anyone advise whether the Content Server (file system) will have performance issues when storing/retrieving documents? Is there any way to improve the performance of the Content Server?
    Question 2:
    Security: what are the necessary security options I need to configure for the Content Server? At the moment, the Windows server is still configured with full access on the content repository folder; otherwise ECC gets an 'access denied' problem when communicating with the Content Server.
    Question 3:
    There are some attachments already in ECC; how do I move those documents to the Content Server?
    Please advise.
    Many thanks
    Jordan

    When I say use certificates: there should be a checkbox with "check signature". This ensures that only SAP machines with an active certificate can access the content server.
    On the file system, I would ensure that no users have access to the file system through the backend. If they can modify documents, you will be sitting with major problems from a document-integrity and legal perspective.
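    On Windows, one way to lock the repository folder down to just the Content Server's service account is with icacls; a rough sketch (the path and account names are placeholders for your environment):
    # Drop inherited permissions, then grant full control only to the
    # service account and local administrators:
    icacls "D:\ContentServer\Repository" /inheritance:r
    icacls "D:\ContentServer\Repository" /grant "DOMAIN\SAPServiceCS:(OI)(CI)F" /grant "BUILTIN\Administrators:(OI)(CI)F"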
    On the issue of migration, there is an OSS note for the migration of data from one repository to another for ArchiveLink.
    Check note number 1043676
    Symptom
    You have created a new repository, and would like to migrate the ArchiveLink Documents to the new repository.
    Other terms
    Migration of documents, ArchiveLink, repository
    Reason and Prerequisites
    This report helps migration of ArchiveLink documents.
    To use this report effectively, you need to ensure that you apply note 732436. The note allows the migration of documents from and to the types of repositories that are supported by ArchiveLink: HTTP, RFC, and R/3 Database.
    Solution
    Before you execute the report, you must set up a repository that can store the documents (transaction OAC0). The repository may be in an external archive (connected via HTTP or RFC) or (as of Basis Release 6.10) in the OLTP database of the SAP system. You can also use SAP Content Server as an external archive for storing documents.
    The report has the following parameters:
      OLD_ARC: ID of the old repository previously used
      NEW_ARC: ID of the new repository
      TEST_RUN: if enabled, only a test run occurs; documents are not copied.
      DEL_FILE: if enabled, the files are deleted from the old repository.
    Launch transaction SE38 and create a program with the name ZMIGRATE_ARCHIVELINK_FILES. Once the program is created, copy in the code from the correction instruction.
