WLP92 - BEA VCR CMS with Filesystem Repository has a bug?

We have detected a problem when the BEA Virtual Content Repository is
configured to use the Filesystem Repository.
The problem arises when we try to create a second HTML document in a
content folder that already contains an HTML document (file.html).
We cannot find a way to rename the automatically generated filename
(file.html) for either file, so creating the second HTML document fails
with a duplicate-name error.
Steps to reproduce the problem:
- change the repository to use the Filesystem Repository:
1) change Connection Class to
com.bea.content.spi.internal.FileSystemRepositoryImpl
2) add the property cm_fileSystem_path = c:/temp
- add new content to a folder, e.g. a document of type 'article': choose
"Create Document", enter the content and save it.
- repeat the previous step -> a RepositoryException is thrown:
com.bea.content.NodeExistsException: Node: file.html already exists.
This problem seems so obvious to us that there must be a fix for it or a
way to work around it (other than not using the Filesystem Repository and
using the Default Repository instead).
Does anybody have a working solution for this problem?
Regards,
Juha

CR290497 fixes this bug.

Similar Messages

  • BEA intercepts response with "An error has occurred"

    Hello,
    I am using BEA 8.1 SP6 on Solaris. I also have my portlets running over WSRP (both consumer and producer are the same BEA version). I have an odd behavior which hopefully someone can explain.
    Scenario: I do an AJAX call to a servlet which then makes a web service call to another server. The actual web service call is wrapped in a try/catch block.
    If the web service is not running, for example, my catch block catches a ConnectionException.
    In the situation I have run into, however, the web service call goes out over the network and eventually times out. I know it times out because my catch block debug tells me that I get a connection timeout. My client never gets that information, though, because somewhere during that process BEA intercepts the response object and sends back the following HTML response:
    -----------------------
    <!-- Generated by Weblogic Workshop -->
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
    <html lang="en">
    <head>
    <title>Error</title>
    </head>
    <body>
    <p>
    An error has occurred:
    </p>
    <blockquote>
    <span></span>
    </blockquote>
    </body></html>
    <!-- Some browsers will not display this page unless the response status code is 200 -->
    ---------------------------
    Again, my server is still processing the web service call and will eventually catch a timeout error, but that information does not get back to the client because BEA has hijacked the response.
    Does anyone know why or how this is happening?
    Thanks - Peter Len

    I believe I have found the issue.
    Our portlets are being accessed over WSRP. Both consumer and producer portals are BEA 8.1 SP6. The problem was that the HTTPS call back to the producer timed out on the consumer so it was the consumer portal that hijacked the response to the producer request.
    In the consumer's WEB-INF/wsrp-producer-registry.xml file, there is the element "connection-timeout-msecs". The default is 60000 ms. When I switched that to a larger time (say 999999 ms), I did not see the problem.
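    For reference, the element described above sits in the consumer's WEB-INF/wsrp-producer-registry.xml; a sketch showing just that element (the rest of the file is omitted):
    <!-- consumer-side timeout for calls back to the producer; default is 60000 ms -->
    <connection-timeout-msecs>999999</connection-timeout-msecs>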

  • How to configure third party repository to Bea VCR adapter?

    Hi,
    I tried to configure the BEA VCR adapter as described in the document http://edocs.bea.com/wlp/docs92/pdf/day170adapter_developers_guide.pdf, but I haven't succeeded.
    I am implementing the JSR-170 Level 1 API for a content server which has its own API for interacting with its repository.
    I am not clear about the connection between the JSR-170 implementation and the BEA adapter com.day.content.spi.jsr170.JNDIRepository.
    What is the jsr170.jndi.name attribute? Why do I need it?
    Where can I get API docs for the package com.day.content.spi.jsr170 or for the class com.day.content.spi.jsr170.JNDIRepository?
    Suppose the BEA portal server and the third-party content server are running in different JVMs (on different machines) and I still want to use the in-process JNDIRepository adapter class; do I need to deploy an EJB at the content server that returns a RepositoryImpl instance (assuming the content server itself is deployed in a J2EE server)?
    Please correct me if my understanding is wrong, and please give me more details.
    Thanks

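    For what it's worth, the jsr170.jndi.name attribute presumably names the JNDI entry under which the adapter expects to find a javax.jcr.Repository, so the Level 1 implementation has to be bound there first. A minimal sketch, with placeholder names, assuming the repository and the portal run in the same JVM:
    import javax.jcr.Repository;
    import javax.naming.InitialContext;

    // Hypothetical sketch: bind the JSR-170 Repository implementation under the JNDI
    // name that jsr170.jndi.name points at, so the JNDIRepository adapter can find it.
    public class RepositoryBinder {
        public static void bind(Repository repositoryImpl) throws Exception {
            InitialContext ctx = new InitialContext();
            ctx.bind("vendor/jcr/repository", repositoryImpl); // must match jsr170.jndi.name
        }
    }
    If the content server runs in a separate JVM, an in-process binding like this will not be reachable from the portal, which is where a remoting layer such as the EJB mentioned above would come in.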

  • HT5318 Ever since I updated what looked like iTunes 10.6.1 my computer has been corrupted with Malware, which has filled up all available space on my hard drive.  What do I do to get rid of the Malware?

    Ever since I updated what looked like iTunes 10.6.1 my computer has been corrupted with Malware, which has filled up all available space on my hard drive.  What do I do to get rid of the Malware?

    First, reboot. That will temporarily free up some space. According to Apple documentation, you need at least 9 GB free for normal operation. You also need enough space left over to allow for growth of your data.
    Use a tool such as OmniDiskSweeper to explore your volume and find out what's taking up the space.
    Proceed further only if the problem hasn't been solved.
    ODS can't see the whole filesystem when you run it just by double-clicking; it only sees files that you have permission to read. To really see everything, you have to run it as root.
    First, back up all data if you haven't already done so. No matter what happens, you should be able to restore your system to the state it was in at the time of that backup.
    Launch the Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ If you’re running Mac OS X 10.7 or later, open LaunchPad. Click Utilities, then Terminal in the page that opens.
    After installing ODS in the Applications folder, drag or copy — do not type — the following line into the Terminal window, then press return:
    sudo /Applications/OmniDiskSweeper.app/Contents/MacOS/OmniDiskSweeper
    You'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning not to screw up.
    I don't recommend that you make a habit of this. Don't delete anything while running ODS as root. When you're done with it, quit it and also quit Terminal.

  • [repo_proxy 13] SessionFacade::openSessionLogon with user info has failed(Transport error: Communication failure.

    Post Author: LeeCUK
    CA Forum: Authentication
    I have installed Enterprise XI R2 onto a Windows 2003 server and all services are running. I have installed the client on another Windows 2003 server and I can connect to the console via the web.
    At first Desktop Intelligence would not work and gave me the error below; however, a new entry appeared in my server dropdown with servername (.Net Portal), and after selecting this Desktop Intelligence worked.
    When I try to run Designer I don't get the extra server in the dropdown, nor does it connect when typed manually, and it gives me the same error I was getting for Desktop Intelligence.
    I'm using Enterprise authentication.
    Has anyone figured this out, please?
    The error I'm getting is below: [repo_proxy 13] SessionFacade::openSessionLogon with user info has failed(Transport error: Communication failure.(hr=#0x80042a01)

    Post Author: jsanzone
    CA Forum: Authentication
    LeeCUK:
    The [repo_proxy 13] error seems to be a catch-all for BO. I've worked with this software (XI R2) for 1 1/2 years now, and each time I've encountered [repo_proxy 13] it has been for a different reason, so you're going to have to apply troubleshooting checks all over the place until you find the problem.
    In my particular case I just resolved the [repo_proxy 13] error I'd been working on this week, and here was my scenario. Our server in the lab died due to a hard drive failure, so we cut our losses and installed a new and more powerful server. When the rebuild was done I could not connect from my laptop to the repository via Designer (good old [repo_proxy 13] error). After much angst I found out that the servers in the CMC (http://<server>/businessobjects/enterprise115/admin/en/admin.cwr) were all down except for one (even though the CCM showed all servers "up"). When I went to start the servers in the CMC I was prompted for an account, and no matter what account I tried it wouldn't work. So I went back to the CCM, stopped all the servers, changed the default account from LocalSystem to the administrator account, restarted all servers in the CCM, went back to the CMC and noted that all servers were now up as well. Then I went back to my laptop, used Designer, and was able to log in successfully.
    That is the end of my story; however, many [repo_proxy 13] errors seem to have different beginnings and endings. For instance, a year ago I had the same [repo_proxy 13] error and brought it to BO tech support. They determined that I had to uninstall the client from my laptop, clean out the registry of any references to BO keys, and remove the c:\program files\Business Objects folder, then reinstall, and sure enough it worked in that case - but not in this case when I tried it again. By digging deeper I found out the root cause of my current problem, and voila, I'm in!
    Good luck!

  • Changing CMS and Audit Repository databases from Oracle to SQL Server 2008

    Hi guys,
    We have a Business Objects Dev environment which was installed with an Oracle 10g database for the CMS and Audit repository.
    Our database team has now decided to change the CMS and Audit databases of the Dev BOE from Oracle to SQL Server 2008.
    What is the ideal way to achieve this? I'm concerned because the old DB is Oracle and the new one would be SQL Server.
    Earlier, I changed the CMS database from one to another by stopping the SIA, taking a backup of the old DB into the new one and pointing to it with the Update Database option. But in that case both the old and new CMS databases were on SQL Server 2005.
    Thanks,
    Ganga

    Denise,
    Thanks for the solution.
    We have done Windows AD and SAP integration on the Dev BOE. Will there be any issue with those after the DB change? I am guessing there won't be, but I just want to confirm. Please reply.
    Also, we need to stop the old SIA and start using the new SIA after step two is done, right?

  • Business Rule Repository has not been configured?

    Hi,
    In EAS, when trying to open Business Rules, I get "Repository has not been configured or you are not authorized to use Business Rules" (I get this with the admin credentials).
    In the details: "Error loading objects from Data Source", "SQLSyntaxExceptionError: ORA-00942 table or view does not exist".
    This is a relatively new Production install running v. 11.1.2.1.
    I ran into this after my LCM migration of a Planning app failed because it was unable to create business rules.
    The IT folks tell me "the database looks fine" when I ask them about the ORA-00942 error.
    Can anyone help point me to a configuration step that might have been missed?
    Everything works in the development environment, so if there's something to compare between the two, I'd appreciate any help.
    Thanks,

    It's necessary to do the following:
    SE80 -> BSP Application -> Tunguska -> Pages with Flow Logic -> start_sts.htm or start_sts2.htm -> Right click & display in same window -> Properties tab -> Transfer Options -> HTTPS should be UNCHECKED.

  • Problem with FileSystem (CSV) to B1 QUTN scenario

    Hi Experts,
    I have a problem with a FileSystem to B1 QUTN scenario. I did everything exactly as in the tutorial. After I copy a CSV file into the "In" folder I get a Failure message in the Message Log. When I open that message I can see that the sender object has the wrong column names:
    <b1im:Payload ObjectTypeId="Z.F.AnySystem_WEB_B1QUTN" ObjectRole="S">
      <file LocalObjectType="WebB1QutnFile" extension="csv" filename="WebB1QutnFile" filespec="c:\b1isn\QUTN\in\WebB1QutnFile.csv" encoding="ISO-8859-1" delchar=";" wrapchar="" xmlID="FirstTag" xmlpar="" csvID="FirstLine" csvpar="" txtID="" txtpar="" xpath="" offsetdef="" pltype="csv" ruledoc="/com.sap.b1i.datasync.directory/Ext.META.File/0010000109.regex.rules.xml">
        <row>
          <col>WebB1QutnFile</col>
        </row>
        <row>
          <col>OADAMS</col>
          <col />
          <col />
          <col />
        </row>
        <row>
          <col />
          <col>001-0099</col>
          <col>3.000000</col>
          <col>3.150000</col>
        </row>
        <row>
          <col />
          <col>001-0099</col>
          <col>4.000000</col>
          <col>1999</col>
        </row>
      </file>
    </b1im:Payload>
    My CSV file:
    WebB1QutnFile
    OADAMS;;;
    ;001-0099;3.000000;3.150000
    ;001-0099;4.000000;1999
    I changed FileInbound.xml to:
    <?xml version="1.0" encoding="UTF-8"?>
    <FileConfiguration xmlns="" direction="inbound">
         <SysId Id="0010000109">
              <pload/>
              <xml_object_identification>FirstTag</xml_object_identification>
              <xml_object_identification_par/>
              <csv_object_identification>FirstLine</csv_object_identification>
              <csv_object_identification_par/>
              <txt_object_identification/>
              <txt_object_identification_par/>
              <character_encoding>ISO-8859-1</character_encoding>
              <delimiter>,</delimiter>
              <wrapper/>
              <xpath/>
              <CSV>
                   <ObjectType id="Z.F.AnySystem_WEB_B1QUTN">
                        <field>Customer</field>
                        <field>Item</field>
                        <field>Quantity</field>
                        <field>Price</field>
                   </ObjectType>
              </CSV>
              <OFFSET/>
         </SysId>
    </FileConfiguration>
    Can anyone tell me what is wrong?
    Regards
    Szymon

    Hi Szymon,
    this was not easy to see. In FileInbound.xml, for the tag ObjectType, please change "id" to "Id":
    <CSV>
         <ObjectType Id="Z.F.AnySystem_WEB_B1QUTN">
              <field>Customer</field>
              <field>Item</field>
              <field>Quantity</field>
              <field>Price</field>
         </ObjectType>
    </CSV>
    Now the columns should get their names as expected.
    Best regards
    Bastian

  • Problem with creating repository

    I can't create a functional repository, because I can't compile the package bodies ck_util, rmmac and jr_workarea. The problem is with DBMS_LOCK. For example, when I try to compile jr_workarea, I get the messages:
    Line # = 287 Column # = 49 Error Text = PLS-00221: 'NL_MODE' is not a procedure or is undefined
    Line # = 287 Column # = 49 Error Text = PL/SQL: ORA-00904: "DBMS_LOCK"."NL_MODE": invalid identifier
    Line # = 287 Column # = 4 Error Text = PL/SQL: SQL Statement ignored
    Line # = 298 Column # = 16 Error Text = PLS-00201: identifier 'DBMS_LOCK' must be declared
    Line # = 298 Column # = 4 Error Text = PL/SQL: Statement ignored
    When I try to run Designer, I get the message: CDR-20043: Non-versioned repository has no workarea or insufficient privileges. What should I do to be able to run Designer? I have Oracle Designer 9i.
    Thanks

    Tom,
    the problem you have indicates that you missed important steps of the installation guide.
    During the Repository installation, you are asked to connect as SYS and execute some grants:
    grant execute on dbms_lock to repos_owner;
    grant execute on dbms_pipe to repos_owner;
    grant create table to repos_owner;
    grant select on sys.v_$nls_parameters to repos_owner with grant option;
    grant select on sys.v_$parameter to repos_owner;
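    To confirm the grants took effect, a quick check against the standard dba_tab_privs dictionary view (a sketch; substitute your actual repository owner for REPOS_OWNER):
    select grantee, table_name, privilege
    from dba_tab_privs
    where grantee = 'REPOS_OWNER'
    and table_name in ('DBMS_LOCK', 'DBMS_PIPE');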
    Please check the installation guide again.
    If nothing was already defined in the Repository, the easiest may be to drop your repository and recreate it after having executed all steps from the Installation Guide.
    Hope this helps,
    Didier

  • Upgrade of the database where the GC repository resides

    I have GC 10.2.0.3 running with the repository stored in a 9.2.0.8 database.
    I would like to upgrade the database to 10.2.0.3 using DBUA if possible. When DBUA sees an upgrade to 10.2 it creates a new SYSMAN schema, but I already have one that the Grid Control install created when I used the option "install into an existing database."
    I've searched MetaLink on how to do this, and created an SR, but I am having trouble getting support to understand what I'm attempting.
    I'm open to anything - creating a new database, etc. The only thing I want to be sure of is not to lose the information that I've already established in Grid Control, which I'm assuming is stored in the SYSMAN schema.

    Totally fresh clean 10gR2 database on a different host and platform.
    2.3 Export/Import
    If the source and destination databases are not both 10g, then export/import is the only option for cross-platform database migration.
    To improve export/import performance, set higher values for BUFFER and RECORDLENGTH. Do not export to NFS, as it will slow down the process considerably.
    Direct path can be used to increase performance. Note: as EM uses VPD, conventional mode will only be used by Oracle on tables where a policy is defined.
    Also, the user running the export should have the EXEMPT ACCESS POLICY privilege in order to export all rows, as that user is then exempt from VPD policy enforcement. SYS is always exempt from VPD or Oracle Label Security policy enforcement, regardless of the export mode, application, or utility that is used to extract data from the database.
    2.3.1 Prepare for Export/Import
    * Mgmt_metrics_raw partitions check
    select table_name, partitioning_type type, partition_count count, subpartitioning_type subtype
    from dba_part_tables
    where table_name = 'MGMT_METRICS_RAW';
    If MGMT_METRICS_RAW has more than 3276 partitions, please see Bug 4376351. This bug is fixed in 10.2. Old partitions should be dropped before export/import to avoid this issue; this will also speed up the export/import process.
    To drop old partitions, run: exec emd_maintenance.partition_maintenance
    (This requires shutting down the OMS and setting job_queue_processes to 0 while the partition drop runs.) Please refer to the EM Performance Best Practices document for more details on usage.
    A workaround to avoid bug 4376351 is to export mgmt_metrics_raw in conventional mode; this is needed only if the partition drop is not run. Note: running the old-partition drop is highly recommended.
    * Shutdown OMS instances and prepare for migration
    Shut down the OMS, set job_queue_processes to 0 and remove the dbms jobs using these commands:
    connect /as sysdba
    alter system set job_queue_processes=0;
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
    2.3.2 Export
    Before running the export, make sure that the NLS_LANG variable matches the database character set. For example, after running this query:
    SQL> select value from nls_database_parameters where PARAMETER='NLS_CHARACTERSET';
    VALUE
    WE8ISO8859P1
    the NLS_LANG environment variable should be set to AMERICAN_AMERICA.WE8ISO8859P1.
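    For example, on the machine where exp will run (a shell sketch):
    export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1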
    * Export data
    exp full=y constraints=n indexes=n compress=y file=fullem102_1.dmp log=fullem102exp_1.log
    Provide system username and password when prompted.
    Verify the log file and make sure that no character-set conversion happened (the line "possible charset conversion" should not be present in the log file).
    * Export without data and with constraints
    exp full=y constraints=y indexes=y rows=n file=fullem102_2.dmp log=fullem102exp_2.log
    Provide system username and password when prompted
    2.3.3 Import
    Before running the import, make sure that the NLS_LANG variable matches the database character set.
    * Run RepManager to drop the target repository (if the target database has an EM repository installed)
    cd ORACLE_HOME/sysman/admin/emdrep/bin
    RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
    * Pre-create the tablespaces and the users in target database
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
    For the first two scripts, provide the input arguments when prompted, or provide them on the command line, for example:
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql MGMT_TABLESPACE <path>/mgmt.dbf <size of mgmt.dbf> <autoextend size> MGMT_ECM_DEPOT_TS <path>/mgmt_ecm_depot1.dbf <size of mgmt_ecm_depot1.dbf> <autoextend size> MGMT_TABLESPACE <path>/mgmt.dbf <size of mgmt.dbf> <autoextend size>
    @/scratch/nagrawal/OracleHomes/oms10g/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql sysman <sysman password> MGMT_TABLESPACE TEMP CENTRAL ON
    * Import data -
    imp constraints=n indexes=n FROMUSER=sysman TOUSER=sysman buffer=2097152 file=fullem102_1.dmp log=fullem102imp_1.log
    * Import without data and with constraints -
    imp constraints=y indexes=y FROMUSER=sysman TOUSER=sysman buffer=2097152 rows=n ignore=y file=fullem102_2.dmp log=fullem102imp_2.log
    Verify the log file and make sure that no character-set conversion happened (the line "possible charset conversion" should not be present in the log file).
    2.3.4 Post Import EM Steps
    * Please refer to Section 3.1 for Post Migration EM Specific Steps
    3 Post Repository Migration Activities
    3.1 Post Migration EM Steps
    The following EM-specific steps should be carried out post-migration:
    * Recompile all invalid objects in sysman schema using
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql
    * Run the post-plugin steps to recompile any invalid objects, create public synonyms, create other users, enable the VPD policy, and re-pin packages:
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
    Provide ORACLE_HOME/sysman/admin/emdrep/sql for em_sql_root
    SYSMAN for em_repos_user
    MGMT_TABLESPACE for em_tablespace_name
    TEMP for em_temp_tablespace_name
    Note: the users created by admin_post_import will have the same passwords as their usernames.
    Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts and invalid objects.
    * The following queues are not enabled after running admin_post_import.sql (EM bug 6439035); enable them manually by running:
    connect sysman/<password>
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_TASK_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_RESPONSE_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_PAF_REQUEST_Q');
    exec DBMS_AQADM.START_QUEUE( queue_name=> 'MGMT_LOADER_Q');
    * Please check for the contexts using the following query:
    connect sysman/<password>
    select * from dba_context where SCHEMA='SYSMAN';
    If any of the following contexts are missing, create them using:
    connect sysman/<password>
    create or replace context storage_context using storage_ui_util_pkg;
    create or replace context em_user_context using setemusercontext;
    * Partition management
    Check that the necessary partitions are created so that the OMS does not run into problems loading into non-existent partitions (this problem can occur only if there is a gap of days between the export and the import):
    exec emd_maintenance.analyze_emd_schema('SYSMAN');
    This will create all necessary partitions up to date.
    * Submit EM dbms jobs
    Reset job_queue_processes back to its original value and resubmit the EM dbms jobs:
    connect /as sysdba
    alter system set job_queue_processes=10;
    connect sysman/<password>
    @ORACLE_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
    * Update OMS properties and startup OMS
    Update emoms.properties to reflect the migrated repository: oracle.sysman.eml.mntr.emdRepConnectDescriptor.
    Update the host name and port with the correct values and start the OMS.
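    The connect-descriptor line in emoms.properties would look roughly like this (a sketch; host, port and SID are placeholders for the new repository database):
    oracle.sysman.eml.mntr.emdRepConnectDescriptor=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<new_host>)(PORT=1521)))(CONNECT_DATA=(SID=emrep)))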
    * Relocate “Management Services and Repository” target
    If the "Management Services and Repository" target needs to be migrated to the destination host, delete the old "Management Services and Repository" target and add it again, with the same name, on the agent running on the new machine.
    * Run the following SQL to verify that the repository collections are enabled for the emrep target:
    SELECT
    target_name,
    metric_name,
    task.task_id,
    task.interval,
    task.error_message,
    trunc((mgmt_global.sysdate_utc-next_collection_timestamp )/1440) delay
    from mgmt_collection_metric_tasks mtask,
    mgmt_collection_tasks task,
    mgmt_metrics met,
    mgmt_targets tgt
    where met.metric_guid = mtask.metric_guid AND
    tgt.target_guid = mtask.target_guid AND
    mtask.task_id = task.task_id(+) AND
    met.source_type > 0 AND
    met.source != ' '
    AND tgt.target_type='oracle_emrep'
    ORDER BY mtask.task_id;
    This query should return the same records in both the source and destination databases. If you find any collections missing in the destination database, run the following to schedule them there:
    DECLARE
    traw RAW(16);
    tname VARCHAR2(256);
    ttype VARCHAR2(64);
    BEGIN
    SELECT target_name, target_type, target_guid
    INTO tname, ttype, traw
    FROM mgmt_targets
    WHERE target_type = 'oracle_emrep';
    mgmt_admin_data.add_emrep_collections(tname,ttype,traw);
    END;
    * Discover/relocate Database and database Listener targets
    Delete the old repository database target and listener, and rediscover the target database and listener in EM.

  • Repost: CMS with 'roles' or individual permissions

    Hi,
    I've read a few of the threads here about CMS but none of them seem to meet my question head on.
    I have a site which I have set up for Contribute for my client. They now want to be able to add a page for each of their branches to update and maintain, but do not want to allow them to edit other pages. They have around 16 branches, so they would be looking at 8 to 16 copies of Contribute @ around $250 AUD.
    What would people's recommendations be? Should they just get the licences for Contribute as well as a small bill from me for setup? What about Contribute Publishing Server (if it still exists)? How does that compare pricewise? What about a CMS with 'roles' or permissions so that I can limit each branch to edit only pages in their folder? Another factor would be future branch increases - if they get another five branches soon they would have to pay for more Contribute licences, but a CMS would already be paid for and pretty much set up.
    Thanks for your wisdom in an area I know little about,
    Bruce

    Thanks SnakEyez,
    You're pretty much right with your conclusion of the permissions needed, although I'll probably have to give access to an individual folder for each branch, rather than a page, but Contribute handles that just as well.
    I'll contact Adobe Australia Sales Dept but first I have to find out whether my client qualifies for either Education or Government since they are a Government affiliated Not-for-Profit who also does training...
    Thanks again,
    Bruce
    > If I am understanding your issue correctly the client will create only 1 page per branch. These are not divisions which will have entirely different websites (ie: take Ford Motor Company for example with Ford, Lincoln, Mercury websites).
    >
    > If this is only one page, do they have an IT department to look over it? If so they could designate a few individuals to update all of the pages to cut back on costs. Or, as an alternative you could stay on and help update those pages. As far as licensing goes though, Adobe has a pricing calculator (http://www.adobe.com/aboutadobe/openoptions/calculator/). I'm not too sure about pricing outside the US, but you would be in TLP level S. If you contact Adobe sales or a volume retailer (sorry but nothing comes to mind for Australia) you should get more of a discount on the software when you buy in quantity.
    >
    > With regards to permissions, I do know that Contribute does allow you to set up permissions. These are typically based on login so I would assume that you would only have 1 login per branch. But how Contribute Publishing Server works I do not have the experience there to help you. For that I would highly recommend contacting Adobe sales. If you want to find out if something is for you, call their sales department.

  • Old DLL made for old card with filesystem - Help!

    Hi,
    I have an old DLL that is used to access an old smartcard with a file system. The DLL uses standard ISO calls like SELECT, READ etc. on these files. I am now finishing an applet with a file system implemented as byte arrays, which works fine, BUT: how can I make these files visible to the DLL? Are the old ISO file-system calls still there, or do I catch these APDU commands and handle them myself within the applet?
    And where can I read about these calls and whether they still work on "pure Java Cards"?
    Pls help :-)

    The thing is, I have an old DLL that has been in use for many years! The card is no longer available, so we are now switching to the Java Card platform. The DLL must still be able to communicate with the card for "file access". I don't have a clue what commands the DLL sends, but I know what the data looks like; that is the reason for looking at what the DLL does. Once I know, my question is: can I handle the APDU commands from this DLL, which previously went straight to the file system, in my applet? And to learn more about the old file handling, where can I read about those APDU commands, such as SELECT, WRITE and READ?
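    As a rough illustration of the "handle them yourself in the applet" approach (not from this thread; the instruction bytes are the standard ISO 7816-4 values, and the file layout is a placeholder), a Java Card applet would intercept the file-system APDUs in process() and emulate the files itself:
    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;
    import javacard.framework.Util;

    public class FileSystemApplet extends Applet {
        private static final byte INS_SELECT      = (byte) 0xA4; // ISO SELECT FILE
        private static final byte INS_READ_BINARY = (byte) 0xB0; // ISO READ BINARY
        private final byte[] ef = new byte[256];                 // emulated elementary file

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new FileSystemApplet().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return; // the applet SELECT itself needs no further handling here
            }
            byte[] buf = apdu.getBuffer();
            switch (buf[ISO7816.OFFSET_INS]) {
            case INS_SELECT:
                // decode the file ID from the command data and remember which EF is selected
                break;
            case INS_READ_BINARY:
                short offset = Util.getShort(buf, ISO7816.OFFSET_P1); // P1/P2 carry the offset
                short le = apdu.setOutgoing();
                apdu.setOutgoingLength(le);
                apdu.sendBytesLong(ef, offset, le);
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
            }
        }
    }
    On a card with a native file system the operating system answers these APDUs itself; on a "pure" Java Card they simply arrive at whichever applet is selected, which is why the emulation has to live in the applet.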

  • SessionFacade::openSessionLogon with user info has failed

    Post Author: kpit
    CA Forum: Authentication
    I cannot log on to the system from my client applications (DeskI and Designer) using 2-tier mode. However, I can log on to the system using 3-tier mode.
    Does anyone know anything about this issue? I have reinstalled BOE XI R2 SP2 with FP 2.6 and reinstalled my client applications as well, but I still have this issue.
    Error message: [repo_proxy 13] SessionFacade::openSessionLogon with user info has failed(CMS host 'KTAZD429' cannot be found on the network. Please verify the name and that network name resolution is working properly.(hr=#0x80040801)

    Post Author: jsanzone
    CA Forum: Authentication
    kpit,
    The good ol' "repo_proxy 13" error has reared its ugly head at you, huh?
    In my experience "repo_proxy 13" can be caused by one of several things, so it's going to be a case of try this or try that before you get it resolved. My last experience with "repo_proxy 13" was with a server running on a stand-alone network with no DNS, firewall, or connection to the outside Internet (basically a lab); when I brought other workstations/laptops in and tried to connect, the "repo_proxy 13" would appear. I finally resolved this by hard-coding the IP address on all machines under the 192.168.1.xxx subnet, and voila, the problem went away. Now this may not apply in your situation, but it might. Just some thoughts, but no guarantees on a resolution.

  • Internet Download error message: the connection with the server has been reset

    With a WRT54GS v6 (firmware v1.50.9) on Windows XP SP2 with Internet Explorer 7 and an internal Intel 802.11b wireless adapter, I was getting the error message "The connection with the server has been reset" when downloading almost anything from the Internet. My IP address would then be reset and the download would be interrupted. To resolve this, I turned off the WZC (Wireless Zero Configuration) service, and that seems to correct the problem.
    Control Panel > Admin Tools > Services > Wireless Zero Configuration > Disable
    Hope this helps.
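    (For reference, the same change can be made from a command prompt; this is a sketch, not from the post, and it assumes the XP service name for Wireless Zero Configuration is WZCSVC:)
    sc config WZCSVC start= disabled
    net stop WZCSVC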

    Hi!
    I'm using Windows 7. I tried looking for the Wireless Zero Configuration service, but it isn't in the Services list!
    So, what can I do now?
    Thanx
