Migrating from OLAP to relational

Hi,
I am supposed to work on an assignment that needs to migrate a data model from OLAP to relational. Generally the industry is moving the other way, i.e. from relational to OLAP.
In my current assignment I have to re-engineer an existing reporting solution built on Oracle Sales Analyzer on top of Oracle Express Server.
In the modified environment we are supposed to use the customer's Oracle 10g Standard Edition licenses, which (as far as I know) do not include the OLAP option.
So we have to come up with a data mart in relational (star schema) form. Please correct me if there is any other possible solution.
I believe some of you here will have a solution to my problem, that is:
1) How to efficiently migrate the OES data into a relational model. We have to report on Oracle Discoverer 10g, and the ETL tool would be Oracle Warehouse Builder.
2) Is there a way to reuse the metadata for ETL or reporting purposes? I have a vague idea of the whole exercise: I'll manually dig out physical table-level details from the OES setup and then pull those attributes into star-schema model tables. But maintaining hierarchies with so many levels will be another problem in relational structures.
Please suggest.
Thanks
Goga

Unfortunately, OWB cannot use an Express database as a source. However, there is some good news. If you are using Warehouse Builder 10gR2 there is a new Expert available for download from the OWB OTN website that uses all the old OSA metadata files to create a completely new project with source and target modules. The source modules are the original dimension and variable flat files.
Because the model that is generated is a logical model you can implement it as either a relational or multi-dimensional schema (assuming you have not tried to use any features specific to a multi-dimensional model such as value based hierarchies or non-additive measures).
Go to the OWB Exchange:
http://www.oracle.com/technology/products/warehouse/htdocs/OWBexchange.html?cat=EXP&submit=Search
Search for "Experts"
The first one listed is:
OSA Migration Expert: Uses the OSA Global metadata files to allow an initial migration example of an OSA system.
Obviously this is not a supported utility so you are on your own, but hopefully this will at least provide you with a good starting point for creating your relational schema.
Hope this helps
Keith

Similar Messages

  • Problem in drilling out to relational from OLAP worksheet

    Hi,
    I am using level-based hierarchies for my dimensions and trying to drill out to a relational worksheet from an OLAP worksheet.
    The problem I am facing is that OLAP always stores values suffixed with "_LEVELNAME". When passing parameters from OLAP we have two options: (i) pass the label, (ii) pass the value.
    a) We cannot pass the value, as the value will be appended with the _levelname and will not match the relational data.
    b) The Disco report displays the long description, which in our case is the code_description value, which again is not the same data as stored in the relational table.
    My question is: how can I make the short description visible in the report? The Excel add-in has a feature through which you can show either the long or the short description.
    Your help is much appreciated.
    Thanks
    Brijesh

    Issue resolved now.
    Thanks
    Brijesh

  • Drill-through from Disco OLAP to relational without being prompted for pwd?

    Hello,
    we are developing a showcase for Discoverer for OLAP on a laptop. We would also like to drill through from an OLAP report to a relational Plus (or Viewer) worksheet. We do this as Marc Rittman does in his article: http://www.rittman.net/archives/001252.html
    So when we do the drill-through, we get to the login page and have to provide our password. Can somebody suggest a way to get directly to the worksheet without the login page being shown?
    As it is only a showcase, it should be fast rather than secure.
    Thank you,
    Tobias

    Hi Tobias
    You must not have chosen the Use Connection ID radio button when you created the link using the Specify Worksheet dialog box. When you did this you were asked for the username, database and EUL to connect to, but you were not asked for the password. Thus, when you execute the drill, you are prompted to supply it.
    However, if you are using Discoverer connected to an Infrastructure and have created public connections, you can use the ID of a public connection and enter it into the Use Connection ID box. This time, because the passwords of public connections are not known to the end users, the link should execute.
    I know this is not what you wanted to hear, but it's the way the Discoverer interface between OLAP and relational works, I'm afraid.
    Hope this helps
    Regards
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • Migration from 1 domain to another (in relation to Exchange)

    Hi, hoping someone can give me an opinion on the best way to carry out this task as it's quite a big project. I've read various articles on TechNet and other blogs, but I'm not sure if this is even possible with the versions of Exchange involved. Details below:
    We are carrying out a domain centralisation project, merging from an external forest (Domain B for example purposes) into another forest (target Domain A). I will be using ADMT to carry out the account and group migrations etc. and will include SID history when doing so.
    There is a two-way external trust between the domains.
    Domain B (source domain) has its own Exchange 2010 environment. The target domain is on Exchange 2003 SP2.
    Once I have migrated the accounts, are there any Exchange tools or methods to ensure that the migrated accounts can still access their Exchange mailboxes (with the domain trust still in place)?
    As mentioned earlier, I will be migrating SID history to ensure no disruption to resource access; I have read that this would also be needed to ensure users can still access their original mailboxes in the source domain. Eventually we will be migrating the mailboxes from the old domain to the new, but that is phase 2 of the project (either by PST import or however this can be done; that will need to be thought about later...).
    Any advice on how I can make sure the migrated users can still access their mailboxes, and what I need to do, would be much appreciated.
    Thanks,
    Sarah

    If the target forest is running Exchange 2003, you aren't 100% wrong in saying it may not be possible. If you are migrating from Exchange 2010 to a new forest, you need to be migrating into an Exchange 2010 organization; downlevel migrations aren't supported, to the best of my knowledge. Here are the links supporting Exchange forest migrations, and they specifically say they are for migrations into an Exchange 2010 organization:
    http://blogs.technet.com/b/meamcs/archive/2011/06/10/exchange-2010-cross-forest-migration-step-by-step-guide-part-i.aspx
    https://gallery.technet.microsoft.com/office/Cross-Forest-Migration-7443029c
    http://blogs.technet.com/b/exchange/archive/2010/08/10/3410619.aspx
    http://technet.microsoft.com/en-us/library/dd876952.aspx
    I'm pretty sure that the tools used won't allow you to migrate from Exchange 2010 into Exchange 2003. I'm sure you could kludge together a migration process that exports mailboxes to PST, then imports them into the target mailboxes, but that would be a painful process.

  • Error ORA-24247 after migrating from 10g to 11g

    Hi all,
    After a migration from a 10.2.0.3 (32-bit) database to a 11.2.0.3 (64-bit) database, we are facing a problem related to the UTL_SMTP package. I have already created an ACL, as you can see below:
    -- create acl
    BEGIN
        DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (acl         => 'user_processos.xml',
                                           description => 'abc',
                                           principal   => 'PROCES',
                                           is_grant    => TRUE,
                                           privilege   => 'connect');
    END;
    -- assign acl
    BEGIN
        DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (acl        => 'user_processos.xml',
                                           host       => 'rac-abc',
                                           lower_port => 1521,
                                           upper_port => NULL);
    END;
    The problem is: when we execute a procedure that calls UTL_SMTP as user PROCES, the error ORA-24247 is raised. I did some research and it all points to needing to create an ACL to solve this problem, but the ACL already exists, as you can see above.

    Hi,
    I had the same issue, well multiple issues, with this at first. You should find the solution in one of these links. Make sure you have done each step in the lists and it will work.
    ORA-29278 SMTP: http://www.ora00600.com/scripts/databaseconfig/ORA-29278.html
    ORA-06512: at SYS.UTL_SMTP: http://www.ora00600.com/scripts/11g/UTL_SMTP_ORA-06512.html
    There are various parameters and configuration steps you need to make sure you have performed for it to work.
    Hopefully that helps,
    Rob
    Edited by: Rob_J on Feb 15, 2013 11:53 AM
    *link was not working

  • Difference in behavior of sql and pl/sql after migrating from 9i to 11g

    After migrating our database from Oracle 9i to Oracle 11g, the developers are worried that the behavior of queries and PL/SQL procedures/functions will change.
    Example:
    In 9i, select salary, count(*) from emp group by salary will display the rows sorted by salary.
    In 11g, select salary, count(*) from emp group by salary will display the rows unsorted by default; we have to add the clause order by salary.
    Could somebody give a list of the other differences in behavior (SQL and PL/SQL) after migrating from 9i to 11g?
    Thanks a lot.

    Tell your developers: garbage in - garbage out. In relational databases only ORDER BY ensures row order. If your developers relied on GROUP BY implemented by SORT and therefore returning ordered rows they had to realize code they wrote is Oracle release dependent and sooner or later code would require changes. And that "sooner or later" is now reality. In newer versions ORACLE can do GROUP BY via SORT or via HASH. And if it is done via HASH - don't expect ordered results. So tell your developers "payback time".
    SY.
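    The hash-versus-sort point can be illustrated outside the database. This is a plain-Java analogy, not Oracle code: a HashMap-backed grouping makes no ordering promise, while a TreeMap happens to return keys in order, much as a sort-based GROUP BY happened to in 9i. Relying on that accidental order instead of asking for it explicitly is exactly the trap described above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class GroupByOrder {
    // Count occurrences of each salary using the supplied map implementation.
    // The grouping result is identical either way; only iteration order differs.
    static Map<Integer, Integer> group(int[] salaries, Map<Integer, Integer> target) {
        for (int s : salaries) {
            target.merge(s, 1, Integer::sum);
        }
        return target;
    }

    public static void main(String[] args) {
        int[] salaries = {3000, 1000, 2000, 1000, 3000};
        // Hash-based grouping: iteration order is unspecified (the 11g HASH case).
        Map<Integer, Integer> hashed = group(salaries, new HashMap<>());
        // Sort-based grouping: keys come back ordered, but only because TreeMap
        // guarantees it -- the analogue of writing an explicit ORDER BY.
        Map<Integer, Integer> sorted = group(salaries, new TreeMap<>());
        System.out.println(hashed.keySet());
        System.out.println(sorted.keySet()); // [1000, 2000, 3000]
    }
}
```

    The only reliable fix on the SQL side is the one given above: add an explicit ORDER BY wherever the row order matters.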

  • Migrating from MSSQL 2000 to Oracle8i

    During the capturing I get the error message "Failed to insert row into table: SS2K_SYSCOLUMNS; ORA-01400". Is there something I can do to avoid this? One more question: after the Oracle model has been created, there is no "User defined functions" catalog. I don't know if this is related to my first problem or whether function migration is not included in the MSSQL 2000 beta plugin.

    Hi,
    Try the following URL for a potential solution
    http://otn.oracle.com/support/tech/migration/workbench/htdocs/bulletins/sqlserver_01.htm
    We have seen this issue before when a user is trying to migrate a SQL Server database that was migrated from one SQL Server to another SQL Server. This means that the user information in the database is not bound to the master database anymore.
    The SQL Server sp_changedbowner procedure should provide a workaround.
    John

  • Migration from MySql to Derby

    I am currently migrating from MySql to Derby.
    I have been developing a MRPII package for a while now and whilst my classes worked with MySql I am finding difficulty with implementing them using Derby.
    I have developed a GUI interface for MRPII functions and use a Database Interface class to connect to the database.
    The Database Interface class gives me the capability of reading and writing from a database.
    I extend this capability to sub classes ie part, customers, suppliers, work orders.
    The problem I have is with the ' "+standardCost+" ' style of statement. In MySql, the database engine recognises that the standard cost variable is a double, but using Derby I get the following errors:
    SQL Exception: Columns of type 'DOUBLE' cannot hold values of type 'CHAR'. In Save Data error code 30000
    SQL Exception: Columns of type 'DOUBLE' cannot hold values of type 'CHAR'. In Save Data error code 30000
    Executed statement INSERT INTO stock VALUES('BUDGET500','A1', '100.0','50.0','1.0','100.0')
    Executed statement INSERT INTO part VALUES('BUDGET500', 'PENTIUM','76.0', '86.0','EACH','MP','7','33')
    The only solution I see is giving each subclass of Database Interface the individual capability within that class to connect to the database and use prepared statements.
    Does anyone know if Derby has the capability of processing the updates using a similar format to that shown in my code?
    package delta.databaseInterface;

    public class Part extends DatabaseInterface {
        private String partNumber;
        private String partType;
        private String description;
        private double standardCost;
        private double sellingPrice;
        private String stockLocation;
        private double quantityOnHand;
        private String unitOfMeasure;
        private double minimumStock;
        private double minimumOrderQuantity;
        private int leadTime;
        private double avaliableStock;
        private int commodityCode;

        public void setPartNumber(String pn) {
            this.partNumber = pn.toUpperCase();
        }
        public void setDescription(String ds) {
            this.description = ds.toUpperCase();
        }
        public void setStandardCost(double sc) {
            this.standardCost = sc;
        }
        public void setSellingPrice(double sp) {
            this.sellingPrice = sp;
        }
        public void setQuantityOnHand(double qt) {
            this.quantityOnHand = qt;
            this.avaliableStock = qt;
        }
        public void setStockLocation(String sl) {
            this.stockLocation = sl.toUpperCase();
        }
        public void setUnitOfMeasure(String uom) {
            this.unitOfMeasure = uom.toUpperCase();
        }
        public void setTypeOfPart(String top) {
            this.partType = top.toUpperCase();
        }
        public void setMinimumStock(double ms) {
            this.minimumStock = ms;
        }
        public void setMinimumOrderQuantity(double moq) {
            this.minimumOrderQuantity = moq;
        }
        public void setLeadTime(int lt) {
            this.leadTime = lt;
        }
        public void setCommodityCode(int cc) {
            this.commodityCode = cc;
        }

        public void insertIntoPartDatabase() {
            super.setDatabase("mrpii");
            super.setUpdate("INSERT INTO part VALUES('" + partNumber + "', '" + description + "','" + standardCost + "'," +
                " '" + sellingPrice + "','" + unitOfMeasure + "','" + partType + "','" + leadTime + "','" + commodityCode + "')");
            super.saveData();
        }
        public void insertIntoStockDatabase() {
            super.setDatabase("mrpii");
            super.setUpdate("INSERT INTO stock VALUES('" + partNumber + "','" + stockLocation + "'," +
                " '" + quantityOnHand + "','" + minimumStock + "','" + minimumOrderQuantity + "','" + avaliableStock + "')");
            super.saveData();
        }
        public void changePartDetails() {
            super.setDatabase("mrpii");
            super.setUpdate("UPDATE part SET description = '" + description + "', unitOfMeasure = '" + unitOfMeasure + "'," +
                " partType = '" + partType + "' WHERE partNumber = '" + partNumber + "' ");
            super.saveData();
        }
        public void adjustStock() {
            super.setDatabase("mrpii");
            super.setUpdate("UPDATE stock SET stockLocation = '" + stockLocation + "'," +
                " avaliableStock = (('" + quantityOnHand + "' - qtyOnHand) + avaliableStock)," +
                " qtyOnHand = '" + quantityOnHand + "' WHERE partNumber = '" + partNumber + "' ");
            super.saveData();
        }
        public void insertAllocation(String allocationID, double quantity) {
            super.setDatabase("mrpii");
            super.setUpdate("INSERT INTO allocations VALUES('" + partNumber + "','" + quantity + "'," +
                " '" + allocationID + "')");
            super.saveData();
        }
        public void reduceAvaliable(double allocated) {
            super.setDatabase("mrpii");
            super.setUpdate("UPDATE stock SET avaliableStock = (avaliableStock - '" + allocated + "') WHERE " +
                "partNumber = '" + partNumber + "' ");
            super.saveData();
        }
        public void increaseAvaliable(double allocated) {
            super.setDatabase("mrpii");
            super.setUpdate("UPDATE stock SET avaliableStock = (avaliableStock + '" + allocated + "') WHERE " +
                "partNumber = '" + partNumber + "' ");
            super.saveData();
        }
    }
    Any help would be deeply appreciated
    Thanks
    Jim
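    For what it's worth, the CHAR-versus-DOUBLE error can be seen just by inspecting the SQL text that this concatenation style produces. The buildInsert helper below is hypothetical (a cut-down, two-column version of the statements in the post), but it shows how quoting a double turns it into a CHAR literal, which MySQL coerces silently and Derby rejects:

```java
public class ConcatDemo {
    // Mimics the '"+standardCost+"' concatenation style from the post:
    // the double is formatted into the string and wrapped in single quotes,
    // so the database receives a CHAR literal, not a numeric one.
    static String buildInsert(String partNumber, double standardCost) {
        return "INSERT INTO part VALUES('" + partNumber + "', '" + standardCost + "')";
    }

    public static void main(String[] args) {
        String sql = buildInsert("BUDGET500", 76.0);
        System.out.println(sql);
        // prints: INSERT INTO part VALUES('BUDGET500', '76.0')
        // '76.0' is a CHAR literal; Derby will not coerce it into a DOUBLE column.
    }
}
```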

    > I am currently migrating from MySql to Derby.
    Dear God, why?
    I am using the embedded functionality.
    > I have been developing a MRPII package for a while now
    MRP as in "manufacturing resource planning"?
    Yes
    > and whilst my classes worked with MySql I am finding difficulty with implementing them using Derby.
    That suggests that your Java code relies too heavily on MySQL features. You didn't make your code portable enough.
    > I have developed a GUI interface for MRPII functions and use a Database Interface class to connect to the database. The Database Interface class gives me the capability of reading and writing from a database. I extend this capability to sub classes ie part, customers, suppliers, work orders.
    > The problem I have is with the ' "+standardCost+" ' type statements. In MySql, the database engine recognises that the standard cost variable is a double though using Derby I get the following errors.
    Right - you're counting on something that might not be true for all databases. MySQL appears to be doing an implicit conversion from string to double. Bad idea.
    > SQL Exception: Columns of type 'DOUBLE' cannot hold values of type 'CHAR'. In Save Data error code 30000
    > SQL Exception: Columns of type 'DOUBLE' cannot hold values of type 'CHAR'. In Save Data error code 30000
    > Executed statement INSERT INTO stock VALUES('BUDGET500','A1', '100.0','50.0','1.0','100.0')
    > Executed statement INSERT INTO part VALUES('BUDGET500', 'PENTIUM','76.0', '86.0','EACH','MP','7','33')
    Do you use PreparedStatements to escape strings and dates for you? If not, you're in for a bumpy ride.
    > The only solution I see is giving each subclass of Database Interface the individual capability within that class to connect to the database and use prepared statements.
    I would not have a single database interface. Each class has its own requirements. How can a single class know about all of them? Better to break your persistence layer into several interfaces, one per persistent class.
    That is what I thought.
    > Does anyone know if Derby has the capability of processing the updates using a similar format as shown in my code.
    I think your code is in trouble, Jim.
    Probably so, though a redesign is not so much of a problem now as I am a bit more experienced than when I started out.
    You don't use prepared statements, which would help with your portability issues.
    You have SQL embedded in the objects. I'd move it out into a separate persistence layer.
    You might want to read about Hibernate, an object/relational mapping layer. You've got objects and tables. Hibernate tools or Middlegen can generate the XML mapping files for you. Once you have those, you'll find that Hibernate will make porting to another database a lot easier. It can be as easy as changing configuration files.
    I'd also look into Spring. It has some great plumbing to help you deal with databases and lots of other stuff.
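    To make the prepared-statement advice concrete, here is a sketch of how the insert for the part table could look with a PreparedStatement. The class and method names are hypothetical, the column order is assumed from the INSERT statements in the post, and the code is untested against the actual mrpii database:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PartDao {
    // Parameter markers replace string concatenation; the driver sends
    // typed values, so a double stays a DOUBLE instead of becoming CHAR.
    static final String INSERT_PART_SQL =
        "INSERT INTO part VALUES(?, ?, ?, ?, ?, ?, ?, ?)";

    static void insertPart(Connection conn, String partNumber, String description,
                           double standardCost, double sellingPrice, String unitOfMeasure,
                           String partType, int leadTime, int commodityCode) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(INSERT_PART_SQL)) {
            ps.setString(1, partNumber);
            ps.setString(2, description);
            ps.setDouble(3, standardCost);   // typed bind: no CHAR-to-DOUBLE problem
            ps.setDouble(4, sellingPrice);
            ps.setString(5, unitOfMeasure);
            ps.setString(6, partType);
            ps.setInt(7, leadTime);
            ps.setInt(8, commodityCode);
            ps.executeUpdate();
        }
    }
}
```

    With typed binds (setDouble, setInt) the driver never sends quoted CHAR literals, and quoting and escaping of the string columns is handled for you.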
    My main class is as follows:
    package delta.databaseInterface;

    import java.sql.SQLException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.util.Vector;
    import javax.swing.JTable;

    /**
     * @author James Charles
     */
    public class DatabaseInterface {
        private String database;
        private String query;
        private String update;
        private String[][] results = new String[0][0];
        private int columnCount = 0;
        private int rowCount = 0;
        private boolean resultsExist = false;
        Object data[][];
        private JTable table;

        public void setDatabase(String database) {
            this.database = database;
        }
        public void setQuery(String query) {
            this.query = query;
        }
        public void setUpdate(String update) {
            this.update = update;
        }

        public void saveData() {
            try {
                Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
                String sourceURL = "jdbc:derby:" + this.database;
                Connection databaseConnection = DriverManager.getConnection(sourceURL);
                Statement statement = databaseConnection.createStatement();
                statement.executeUpdate(this.update);
                databaseConnection.close();
            } // end of try
            catch (ClassNotFoundException cnfe) {
                System.err.println(cnfe);
            }
            catch (SQLException sqle) {
                System.err.println(sqle + " In Save Data error code " + sqle.getErrorCode());
                System.out.println("Executed statement " + this.update);
            }
            catch (InstantiationException ie) {
                System.err.println(ie);
            }
            catch (IllegalAccessException iae) {
                System.err.println(iae);
            }
        }

        public String getDatabase() {
            return this.database;
        }
        public String getQuery() {
            return this.query;
        }

        public void setResults() {
            try {
                Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
                String sourceURL = "jdbc:derby:" + this.database;
                Connection databaseConnection = DriverManager.getConnection(sourceURL);
                Statement statement = databaseConnection.createStatement();
                ResultSet thisResult = statement.executeQuery(this.query);
                databaseConnection.commit();
                System.out.println("Executed statement");
                convertToArray(thisResult);
                databaseConnection.close();
            } // end of try
            catch (ClassNotFoundException cnfe) {
                System.err.println(cnfe);
            }
            catch (SQLException sqle) {
                System.err.println(sqle);
            }
            catch (InstantiationException ie) {
                System.err.println(ie);
            }
            catch (IllegalAccessException iae) {
                System.err.println(iae);
            }
        }

        public void convertToArray(ResultSet resultsIn) throws SQLException {
            this.resultsExist = false;
            Vector dataRows = new Vector();
            ResultSetMetaData metadata = resultsIn.getMetaData();
            this.columnCount = metadata.getColumnCount();
            String[] columnNames = new String[this.columnCount];
            for (int i = 0; i < this.columnCount; i++)
                columnNames[i] = metadata.getColumnLabel(i + 1);
            while (resultsIn.next()) { // for each row...
                String[] rowData = new String[this.columnCount]; // create array to hold the data
                for (int i = 0; i < this.columnCount; i++) // for each column
                    rowData[i] = resultsIn.getString(i + 1); // retrieve the data item
                dataRows.addElement(rowData); // store the row in the vector
            }
            this.results = new String[dataRows.size()][this.columnCount];
            this.data = new Object[dataRows.size()][this.columnCount];
            for (int column = 0; column < this.columnCount; column++)
                for (int row = 0; row < dataRows.size(); row++) {
                    this.results[row][column] = ((String[]) (dataRows.elementAt(row)))[column];
                    this.data[row][column] = ((String[]) (dataRows.elementAt(row)))[column];
                }
            this.rowCount = dataRows.size();
            try {
                if (results[0][0].equals(null)) { this.resultsExist = false; } // checks results
                else { this.resultsExist = true; }
            }
            catch (ArrayIndexOutOfBoundsException e) {
                System.out.println(e + " in catch");
                this.resultsExist = false;
                System.out.println("Value of exists in catch = " + this.resultsExist);
            }
        } // end of set results

        public JTable getTable(Object[] columnNames) {
            table = new JTable(data, columnNames);
            return table;
        }
        public int getColumnCount() {
            return this.columnCount;
        }
        public int getRowCount() {
            return this.rowCount;
        }
        public String getData(int row, int column) {
            return this.results[row][column];
        }
        public boolean doResultsExist() {
            return this.resultsExist;
        }
    } // end of database Interface
    I developed this code about a year ago and maybe it is time to simplify it.
    With MySql I could create an instance of Database Interface or use one of its subclasses and perform any function on the database by calling its methods.
    maybe it is time for a redesign!
    thanks
    jim

  • Mail "unexpectedly quits" after migration from snow leopard to new iMac running Mountain Lion

    Mail "unexpectedly quits" after migrating Time Machine files from Snow Leopard to a new iMac running Mountain Lion. I can run Connection Doctor OK, but the Activity window is blank. If I try to open the message viewer window, the Mail program crashes. I would really like to get my old emails back as the old iMac is totally dead. Thanks for any help.

    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Step 1
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left.
    Enter the name of the crashed application or process in the Filter text field. Post the messages from the time of the last crash, if any — the text, please, not a screenshot. 
    When posting a log extract, be selective. In most cases, a few dozen lines are more than enough.
    Please do not indiscriminately dump thousands of lines from the log into a message.
    Important: Some private information, such as your name, may appear in the log. Edit it out by search-and-replace in a text editor before posting.
    Step 2
    Still in the Console window, look under User Diagnostic Reports for crash reports related to the process. The report name starts with the name of the crashed process, and ends with ".crash". Select the most recent report and post the entire contents — again, the text, not a screenshot. In the interest of privacy, I suggest that, before posting, you edit out the “Anonymous UUID,” a long string of letters, numbers, and dashes in the header of the report, if it’s present (it may not be.) Please don’t post shutdownStall, spin, or hang logs — they're very long and not helpful.

  • Migration from OAS 10g to Weblogic 10.3

    Hello,
    I am migrating a large app from OAS 10g to Weblogic 10.3.
    1.
    The main part of the job was to prepare descriptors for Weblogic. Unfortunately I could not find any tool that could do the job :( There are some problems with descriptor namespaces. The schema and namespaces given in the 10.3 docs are not working (not available):
    http://edocs.bea.com/wls/docs103/ejb/DD_defs_reference.html
    So I use the one from the 10.0 release in weblogic-ejb-jar.xml:
    <weblogic-ejb-jar xmlns="http://www.bea.com/ns/weblogic/10.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.bea.com/ns/weblogic/10.0 http://www.bea.com/ns/weblogic/10.0/weblogic-ejb-jar.xsd">
    and the one from the 9.0 release in weblogic-cmp-rdbms-jar.xml:
    <weblogic-rdbms-jar xmlns="http://www.bea.com/ns/weblogic/90" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.bea.com/ns/weblogic/90 http://www.bea.com/ns/weblogic/90/weblogic-rdbms20-persistence.xsd">
    There are also some bugs in the docs about EJB relations.
    After fixing some schema compliance exceptions this step succeeded.
    2. Next, the EJBComplianceChecker: it is much more restrictive than OAS verification, so updates to the EJB interfaces are necessary. That is not a problem with a small app, but when there are a lot of code branches to migrate it becomes a problem. I have been looking for some switch that could relax the EJBComplianceChecker spec verification level, but with no result.
    3. Now after EJB compliance checker done its job with success I have an exception that I do not understand:
    An error occurred during activation of changes, please see the log for details.
    Exception
    preparing module: EJBModule(corpo_ejb.jar)
    Unable to deploy EJB: corpo_ejb.jar from corpo_ejb.jar:
    There are 1 nested errors:
    java.io.IOException: JDT compilation error! at
    weblogic.ejb.container.ejbc.CompilerForJDT.compile(CompilerForJDT.java:66)
    at
    weblogic.ejb.container.ejbc.EJBCompiler.doCompile(EJBCompiler.java:357)
    at
    JDT compilation error!
    Could you please give me a pointer to where the problem could be? I don't have any idea where to start looking.
    What are your experiences with migrations from OAS to Weblogic 10?
    Thanks in advance!
    Edited by: Stoigniew Sztank on Oct 10, 2008 4:00 AM

    Hi Stoigniew Sztank,
    I am working on migrating an Enterprise application developed using Struts, EJB 2.0, and JMS. It's been deployed on OAS 10g and Websphere, but I need to deploy the application on Weblogic 10.3. It seems you have migrated a J2EE application from OAS 10g to Weblogic 10.3; could you please list the steps that you followed to migrate the application?
    As per my understanding, the following things need to be taken care of:
    1) Weblogic Descriptor files:
    1.1 Weblogic.xml:- we added security roles and ejb-reference-description for the ejbs.
    1.2 Weblogic-ejb-jar.xml for all the ejbs used in the application.
    1.3 Weblogic-application.xml
    1.4 Resource Adapter
    2) JMS queue set up
    3) JDBC set up
    It would be a great help if you can let me know what are the steps to migrate the application.
    Thanks and Regards
    Deepak Dani

  • Migration from ATG Dynamo 5.1 to Weblogic 10.3 Server

    Hi,
    We are currently involved in a project in which the application has to be migrated from ATG Dynamo Server 5.1 to the Weblogic 10.3 application server.
    While doing this we are facing some problems related to
    DynamoHttpServletRequest.
    Previously the code was written as below:
    DynamoHttpServletRequest atgRequest = (DynamoHttpServletRequest) request;
    HttpSession atgSession = (HttpSession) atgRequest.resolveName("/");
    Since we are moving to the Weblogic server, we are planning to remove DynamoHttpServletRequest and use the javax HttpServletRequest instead. But to do this I am unable to find equivalent functionality for resolveName("/").
    I tried writing the two lines of code below in place of the two lines above, but it does not serve the purpose.
    HttpServletRequest weblogicRequest = (HttpServletRequest) request;
    HttpSession weblogicSession = (HttpSession) weblogicRequest.getSession();
    Can anybody please help me solve this problem, i.e. how to replace DynamoHttpServletRequest with HttpServletRequest?
    Any suggestions are welcome.
    Thanks in advance
    Shailendra

    Hi Ravi,
    Thanks for your response.
    While migrating we have tried to remove the ATG dependency and we are getting session invalid. We are unable to find a similar method for “resolveName” in HTTPServletRequest. The method “resolveName” of the DynamoHttpServletRequest is being used to retrieve the session object by passing a String parameter (‘/’ or any URL).
    Whereas HttpServletRequest has only two methods [getSession() or getSession(boolean)] to retrieve the session object, and here we cannot pass any String parameter. Thus we are unable to replicate the existing functionality using the HttpServletRequest class.
    Below are some code examples for your reference.
    Sample1:
    The commented-out code below is the original; the replacement we tried follows it:
    //import atg.servlet.DynamoHttpServletRequest;
    /* DynamoHttpServletRequest atgRequest = (DynamoHttpServletRequest) request;
    HttpSession atgSession = (HttpSession) atgRequest.resolveName("/");
    String atgsession = "jsessionid=" + atgSession.getId(); */
    HttpServletRequest weblogicRequest = (HttpServletRequest) request;
    HttpSession weblogicSession = (HttpSession) weblogicRequest.getSession();
    String weblogicsession = "jsessionid=" + weblogicSession.getId();
    Sample2:
    The commented-out code below is the original; the replacement we tried follows it:
    /* ATG Dynamo for Session Last Access Time functionality */
    //import atg.servlet.DynamoHttpServletRequest;
    /* DynamoHttpServletRequest atgRequest = (DynamoHttpServletRequest) request;
    HttpSession session = (HttpSession) atgRequest.resolveName("/atg/dynamo/servlet/sessiontracking/SessionManager/" + sessionID); */
    HttpServletRequest weblogicRequest = (HttpServletRequest) request;
    HttpSession session = (HttpSession) weblogicRequest.getSession();
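    One common workaround, sketched below, is to maintain your own sessionID-to-session registry, populated from an HttpSessionListener (sessionCreated/sessionDestroyed), so a session can be looked up by ID the way resolveName(...) allowed. This is not an ATG or Weblogic API: the class and names here are invented for illustration, and the real registry would store HttpSession objects rather than the plain Object used in this self-contained sketch.

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical registry: in a real app, register/unregister would be
    // called from an HttpSessionListener, and lookup(sessionID) would
    // replace atgRequest.resolveName(".../SessionManager/" + sessionID).
    public class SessionRegistry {
        private static final Map<String, Object> SESSIONS = new ConcurrentHashMap<>();

        public static void register(String id, Object session) { SESSIONS.put(id, session); }
        public static void unregister(String id) { SESSIONS.remove(id); }
        public static Object lookup(String id) { return SESSIONS.get(id); }

        public static void main(String[] args) {
            register("abc123", "fake-session-object"); // stand-in for an HttpSession
            System.out.println(lookup("abc123"));
        }
    }
    ```

    For the current-request case (Sample1), request.getSession() is sufficient; the registry is only needed when a session must be found by an arbitrary ID, as in Sample2.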
    We would like to know whether any similar functionality or method is available in HttpServletRequest or the servlet API that we can use.
    Thanks,
    Nagesh

  • Data Migration From Peoplesoft , JDEdwards To SAP.

    Hi,
    This is Kiran. We are doing data migration work from PeopleSoft and JDEdwards to SAP. On the SAP side it involves master data tables related to Customer, Vendor, and Material, and metadata tables related to SD, MM, and FI. As SAP consultants we identified fields from the above tables and marked them as Required, Not Required, or Mandatory. The PeopleSoft and JDEdwards folks came up with the same from their side. Now we want to map the fields. As I am new to data migration, can anybody suggest what steps are involved in data migration and how to do the data mapping? Thanks in advance.
    Thanks
    Kiran.B

    Hi Kiran,
    Good... Check out the following documentation and links
    Migrating from one ERP solution to another is a very complex undertaking. I don't think I would start with comparing data structures. It would be better to understand the business flows you have currently with any unique customizations and determine how these could be implemented in your target ERP. Once this is in place, you can determine the necessary data unload/reload to seed your target system.
    A real configuration of an ERP system will only happen when there is real data in the system. The mapping of legacy system data to a new ERP is a long difficult process, and choices must be made as to what data gets moved and what gets left behind. The only way to verify what you need to actually run in the new ERP environment is to migrate the data over to the ERP development and test environments and test it. The only way to get a smooth transition to a new ERP is to develop processes as automatic as possible to migrate the data from the old system to the new.
    Data loading is not a project that can be done after everything else is ready. Just defining the data in the legacy system is a huge horrible task. Actually mapping it to one of the ERP system schemas is a lesson in pain that must be experienced to be believed.
    The scope of a data migration project is usually a fairly large development process with a lot of proprietary code written to extract legacy data, transform and load the data into the ERP system. This process is usually called ETL (extract, transform, load.)
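    As a toy illustration of the transform step, the core artifact of the mapping exercise is often just a legacy-field-to-target-field table applied record by record. The field names below are simplified examples chosen for illustration, not a real PeopleSoft-to-SAP mapping:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: maps a legacy customer record's fields to target
    // field names, dropping fields marked "not required" (i.e. unmapped).
    public class FieldMapper {
        private static final Map<String, String> CUSTOMER_MAP = new HashMap<>();
        static {
            CUSTOMER_MAP.put("CUST_ID", "KUNNR");    // example mapping
            CUSTOMER_MAP.put("CUST_NAME", "NAME1");  // example mapping
        }

        // Transform one legacy record into a target record.
        public static Map<String, String> transform(Map<String, String> legacy) {
            Map<String, String> target = new HashMap<>();
            for (Map.Entry<String, String> e : legacy.entrySet()) {
                String targetField = CUSTOMER_MAP.get(e.getKey());
                if (targetField != null) {
                    target.put(targetField, e.getValue());
                }
            }
            return target;
        }

        public static void main(String[] args) {
            Map<String, String> legacy = new HashMap<>();
            legacy.put("CUST_ID", "1001");
            legacy.put("CUST_NAME", "Acme");
            legacy.put("OBSOLETE_FLAG", "X"); // unmapped: left behind
            Map<String, String> target = transform(legacy);
            System.out.println(target.get("KUNNR"));
            System.out.println(target.get("NAME1"));
        }
    }
    ```

    In practice this table is maintained per object (customer, vendor, material) and per module, and the real work is agreeing on the mapping with both legacy teams before any code is written.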
    How is data put into the ERP?
    There is usually a painfully slow data import facility with most ERP systems. Mashing data directly into the (usually undocumented) table schema is also an option, but must be carefully researched. Getting the data out of the legacy systems is usually left to the company buying the ERP. These export/import processes can be complex and slow; sometimes specialized ETL tools can help, and sometimes it is easier to use whatever your programmers are familiar with, such as C, shell, or Perl.
    An interesting thing to note is that many bugs and quirks of the old systems will be found when the data is mapped and examined. I am always amazed at what data I find in a legacy system; usually the data has no relational integrity. Note that it does not gain much more integrity once it is placed in an ERP system, so accurate and clean data going in helps create a system that can work.
    The Business Analysts (BAs) that are good understand the importance of data migration and have an organized plan to migrate the data, allocate resources, give detailed data maps to the migrators (or help create the maps) and give space estimates to the DBAs. Bad BAs can totally fubar the ERP implementation. If the BAs and management cannot fathom that old data must be mapped to the new system, RUN AWAY. The project will fail.
    Check these links
    http://pdf.me.uk/informatica/AAHN/INFDI11.pdf
    http://researchcenter.line56.com/search/keyword/line56/Edwards%20Sap%20Migration%20Solutions/Edwards%20Sap%20Migration%20Solutions
    http://resources.crmbuyer.com/search/keyword/crmbuyer/Scm%20Data%20Migration%20On%20Peoplesoft%20Peoplesoft%20Data%20Migration/Scm%20Data%20Migration%20On%20Peoplesoft%20Peoplesoft%20Data%20Migration
    Good Luck and Thanks
    AK

  • Data Migration from 4.7 to ECC 6.0

    Dear Friends,
    We are doing an upgrade from SAP 4.7 to ECC 6.0.
    We have activated the new GL and configured all ECC 6.0 functionality.
    But we are stuck on data migration from classic GL to new GL.
    We want to activate the migration cockpit, but the system is asking for a license key.
    My question is, which license key it is?
    From where we will get this?
    Kindly don't send me any links and any documents related to ecc6 functionality.
    Please guide us on this.
    Thanks & Regards,
    Reva Naik

    Hi,
    following link may give you some hints:
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/2e/9e4638a28b2763e10000009b38f8cf/frameset.htm
    Regards
    Bernd

  • Migration from G4 iMac to new(ish) Macbook

    My daughter is graduating HS (yay!) and going to a great college in the fall (double yay!) and her grandma has given her a nearly-new white MacBook (2.16Ghz) (triple yay!) as a graduation present. Grandma, bless her heart, decided she needed a faster machine. Some people buy sports cars, Grandma likes having the newest and sleekest Mac. We love Grandma!
    I am reading up on the steps to help my daughter migrate her data from the G4 iMac (which also came from Grandma) to the MacBook. It seems fairly simple, and I've done Target Disk Mode before, so I don't think I can go far wrong, but could anyone check/correct these steps:
    1. Backup everything important from the iMac. (mainly her photo and music libraries as all her work is HS related and she's happy to be rid of it) DONE
    2. MacBook has grandma's account and preferences set, so we should restore from the install disks to get a fresh start. Right?
    3. Once the MacBook is set up with her new account, migrate her stuff over via TDM. I think Migration Assistant won't work due to the disparity between machines.
    4. De-authorize the iMac from our family iTunes account (already maxed out) and authorize the MacBook.
    5. Install any needed software (MS Office & ?) from disks or .dmg.
    6. Either pass iMac on to sibling or completely wipe it and sell it for $10. It has some ailment that is causing me to have to reset the PMU, so I suspect imminent component failure. Excellent time to upgrade!
    Thanks for any tips, suggestions, or corrections!
    Alan

    Please read the following about migrating from a PPC to an Intel Mac:
    A Basic Guide for Migrating to Intel-Macs
    If you are migrating a PowerPC system (G3, G4, or G5) to an Intel-Mac be careful what you migrate. Keep in mind that some items that may get transferred will not work on Intel machines and may end up causing your computer's operating system to malfunction.
    Rosetta supports "software that runs on the PowerPC G3, G4, or G5 processor that are built for Mac OS X". This excludes the items that are not universal binaries or simply will not work in Rosetta:
    Classic Environment, and subsequently any Mac OS 9 or earlier applications
    Screensavers written for the PowerPC
    System Preference add-ons
    All Unsanity Haxies
    Browser and other plug-ins
    Contextual Menu Items
    Applications which specifically require the PowerPC G5
    Kernel extensions
    Java applications with JNI (PowerPC) libraries
    See also What Can Be Translated by Rosetta.
    In addition to the above you could also have problems with migrated cache files and/or cache files containing code that is incompatible.
    If you migrate a user folder that contains any of these items, you may find that your Intel-Mac is malfunctioning. It would be wise to take care when migrating your systems from a PowerPC platform to an Intel-Mac platform to assure that you do not migrate these incompatible items.
    If you have problems with applications not working, then completely uninstall said application and reinstall it from scratch. Take great care with Java applications and Java-based Peer-to-Peer applications. Many Java apps will not work on Intel-Macs as they are currently compiled. As of this time Limewire, Cabos, and Acquisition are available as universal binaries. Do not install browser plug-ins such as Flash or Shockwave from downloaded installers unless they are universal binaries. The version of OS X installed on your Intel-Mac comes with special compatible versions of Flash and Shockwave plug-ins for use with your browser.
    The same problem will exist for any hardware drivers such as mouse software unless the drivers have been compiled as universal binaries. For third-party mice the current choices are USB Overdrive or SteerMouse. Contact the developer or manufacturer of your third-party mouse software to find out when a universal binary version will be available.
    Also be careful with some backup utilities and third-party disk repair utilities. Disk Warrior 4.1, TechTool Pro 4.6.1, SuperDuper 2.5, and Drive Genius 2.0.2 work properly on Intel-Macs with Leopard. The same caution may apply to the many "maintenance" utilities that have not yet been converted to universal binaries. Leopard Cache Cleaner, Onyx, TinkerTool System, and Cocktail are now compatible with Leopard.
    Before migrating or installing software on your Intel-Mac check MacFixit's Rosetta Compatibility Index.
    Additional links that will be helpful to new Intel-Mac users:
    Intel In Macs
    Apple Guide to Universal Applications
    MacInTouch List of Compatible Universal Binaries
    MacInTouch List of Rosetta Compatible Applications
    MacUpdate List of Intel-Compatible Software
    Transferring data with Setup Assistant - Migration Assistant FAQ
    Because Migration Assistant isn't the ideal way to migrate from PowerPC to Intel Macs, using Target Disk Mode, copying the critical contents to CD/DVD or an external hard drive, or networking will work better when moving from PowerPC to Intel Macs. The first section below discusses Target Disk Mode; it is followed by a section on networking with Macs that lack Firewire.
    If both computers support the use of Firewire then you can use the following instructions:
    1. Repair the hard drive and permissions using Disk Utility.
    2. Backup your data. This is vitally important in case you make a mistake or there's some other problem.
    3. Connect a Firewire cable between your old Mac and your new Intel Mac.
    4. Startup your old Mac in Target Disk Mode.
    5. Startup your new Mac for the first time, go through the setup and registration screens, but do NOT migrate data over. Get to your desktop on the new Mac without migrating any new data over.
    If you are not able to use a Firewire connection (for example, you have a Late 2008 MacBook that only supports USB):
    1. Set up a local home network: Creating a small Ethernet Network.
    2. If you have a MacBook Air or Late 2008 MacBook see the following:
    MacBook (13-inch, Aluminum, Late 2008) and MacBook Pro (15-inch, Late 2008)- Migration Tips and Tricks;
    MacBook (13-inch, Aluminum, Late 2008) and MacBook Pro (15-inch, Late 2008)- What to do if migration is unsuccessful;
    MacBook Air- Migration Tips and Tricks;
    MacBook Air- Remote Disc, Migration, or Remote Install Mac OS X and wireless 802.11n networks.
    Copy the following items from your old Mac to the new Mac:
    In your /Home/ folder: Documents, Movies, Music, Pictures, and Sites folders.
    In your /Home/Library/ folder:
    /Home/Library/Application Support/AddressBook (copy the whole folder)
    /Home/Library/Application Support/iCal (copy the whole folder)
    Also in /Home/Library/Application Support (copy whatever else you need including folders for any third-party applications)
    /Home/Library/Keychains (copy the whole folder)
    /Home/Library/Mail (copy the whole folder)
    /Home/Library/Preferences/ (copy the whole folder)
    /Home /Library/Calendars (copy the whole folder)
    /Home /Library/iTunes (copy the whole folder)
    /Home /Library/Safari (copy the whole folder)
    If you want cookies:
    /Home/Library/Cookies/Cookies.plist
    /Home/Library/Application Support/WebFoundation/HTTPCookies.plist
    For Entourage users:
    Entourage is in /Home/Documents/Microsoft User Data
    Also in /Home/Library/Preferences/Microsoft
    Credit goes to Macjack for this information.
    If you need to transfer data for other applications please ask the vendor or ask in the Discussions where specific applications store their data.
    Finally, once you have transferred what you need, restart the new Mac and test to make sure the contents are there for each of the applications.
    Written by Kappy with additional contributions from a brody.
    Revised 1/6/2009

  • Wiki migration from 10.6.8 to 10.10 Server 4 no data loaded

    I'm trying to migrate wiki from an old OSX Server 10.6.8 to a brand new installation of 10.10.1 with Server App ver 4.
    I followed the article at http://krypted.com/mac-os-x/setup-os-x-yosemite-server-as-a-wiki-server/
    I used the migration method:
    1) copy the /Library/Collaboration folder to Yosemite Server ~/Desktop/Collaboration
    2) changed owner to _teamserver of the copied folder
    3) started wiki service
    4) created a new wiki
    5) configure accounts to access LDAP of the old server
    6) run sudo wikiadmin migrate -r ~/Desktop/Collaboration
    No wikis were created. The log file shows that no users or groups were found.
    Extract of the log:
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSConfig.m:128 7db9d300 +0ms] CSConfig: Initializing or updating cached config file: /Library/Server/Wiki/Config/collabd.plist because file mod date 2014-12-15 11:07:02 +0000 is newer than cached file mod date (null)
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:90 7db9d300 +0ms] Wikiadmin trying to start PGCServer...
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:96 7db9d300 +135ms] PGCServer started.
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:237 7db9d300 +0ms] Repository location appears to be a relative path (/Users/micei/desktop/Collaboration), prepending --sourceRoot ()
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:240 7db9d300 +0ms] Repository location is /Users/micei/desktop/Collaboration
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:255 7db9d300 +2ms] Updating schema to current version
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:142 7db9d300 +0ms] Updating schema to latest schema version (166)
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:148 7db9d300 +0ms] Bumping schema to version (166)
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:351 7db9d300 +5ms] Schema updates completed.
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:258 7db9d300 +0ms] Running migration from source location /Users/micei/desktop/Collaboration
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:724 7db9d300 +0ms] Migrating...
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:730 7db9d300 +47ms] Migrating known users
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Warning>: [UserMigrator.m:259 7db9d300 +0ms] No Users directory found. Skipping.
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:732 7db9d300 +0ms] Generating placeholders for all known pages and wikis
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Warning>: [ProjectMigrator.m:259 7db9d300 +0ms] No Groups directory found. Skipping.
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:734 7db9d300 +0ms] Found 0 pages belonging to 0 wikis and 1 users.
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:737 7db9d300 +0ms] Re-scanning 0 pages for pasted image/attachment URLs
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:522 7db9d300 +0ms] Copying content to real tables...
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:627 7db9d300 +14ms] Destroying migration entity and scratch tables...
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:655 7db9d300 +45ms] Done
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:747 7db9d300 +0ms] Importing user preferences
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:693 7db9d300 +4ms] Rebuilding search index...
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSLocalServiceLocator.m:173 7db9d300 +1ms] Allocating service ContentService
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSConfig.m:128 cb84000 +33ms] CSConfig: Initializing or updating cached config file: /Library/Server/Wiki/Config/collabd-search.plist because file mod date 2014-12-15 11:07:02 +0000 is newer than cached file mod date (null)
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSLocalServiceLocator.m:173 cc07000 +0ms] Allocating service HTMLFilterService
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:719 7db9d300 +118ms] Done
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [MigrationController.m:753 7db9d300 +0ms] Migration complete
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:261 7db9d300 +0ms] Running post-migration updates
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:366 7db9d300 +3ms] Running post-migration updates...
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:104 7db9d300 +0ms] Executing migration SQL script /Applications/Server.app/Contents/ServerRoot/usr/share/collabd/server/sql/migra tions/36.sql
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSExecutionTimer.m:14 7db9d300 +6ms] TIMER: 6ms ---> migration #36
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:104 7db9d300 +0ms] Executing migration SQL script /Applications/Server.app/Contents/ServerRoot/usr/share/collabd/server/sql/migra tions/48.sql
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSExecutionTimer.m:14 7db9d300 +1ms] TIMER: 2ms ---> migration #48
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:116 7db9d300 +0ms] Executing migration block #58
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSExecutionTimer.m:14 7db9d300 +1ms] TIMER: 2ms ---> migration #58
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:104 7db9d300 +0ms] Executing migration SQL script /Applications/Server.app/Contents/ServerRoot/usr/share/collabd/server/sql/migra tions/68.sql
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSExecutionTimer.m:14 7db9d300 +2ms] TIMER: 3ms ---> migration #68
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:116 7db9d300 +0ms] Executing migration block #160
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [CSExecutionTimer.m:14 7db9d300 +0ms] TIMER: 1ms ---> migration #160
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:374 7db9d300 +0ms] Post-migration updates completed.
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [SchemaMigrator.m:379 7db9d300 +6ms] Performing a VACUUM FULL ANALYZE of user_activity
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:264 7db9d300 +39ms] Done
    Dec 16 12:35:20 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:516 7db9d300 +0ms] Wikiadmin trying to stop PGCServer...
    Dec 16 12:35:21 Micheles-iMac.local wikiadmin[1307] <Info>: [main.m:519 7db9d300 +130ms] PGCServer stopped.
    Please help me

    I had a bad 10.8 server; here is what I did to rescue my wiki:
    1. Listed users on the old wiki server with " dscacheutil -q user | grep -A 3 -B 2 -e uid:\ 5'[0-9][0-9]' " (I don't use Open Directory)
    2. Created a new 10.8.5 server
    3. Created users on the new server (keep an eye on the order of creation; the ID is more important than the username)
    4. Turned on Wiki *Important: this starts up the service for the first time
    5. Then followed these instructions: http://support.apple.com/en-us/HT5585
    *The wiki worked
    6. Then upgraded from 10.8.5 to Yosemite Server
    ALL WORKS
    *Note 1: Apple's KB note covers a 10.6-to-10.8 migration, which I did not use; try that first.
    *Note 2: Clone the new server at each important step with CCC or Disk Utility; step 5 completely trashed the server at one point, and I never knew what happened.
