Cloning Databases - Anyone using Direct NFS & clonedb?

Looking to see what experiences other customers have had using/trying Direct NFS (dNFS) and clonedb - a newer feature available in 11.2.0.2 (and above) - described in MOS Note 1210656.
In particular we are looking at this for cloning a Production database for testing changes.
Thanks!
Marty

Tim Hall gave a presentation about it at InSync in Sydney some time ago. You should be able to find his experience on his website.
Here it is: http://www.oracle-base.com/articles/11g/Clonedb_11gR2.php
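For context, the clonedb mechanics are compact: the clone instance is started with clonedb=true in its pfile, a controlfile is created against an image-copy backup of production on the NFS share, and DBMS_DNFS turns each backup copy into a sparse, copy-on-write datafile. A minimal sketch with hypothetical paths (the MOS note's clonedb.pl script generates the full, supported version of these steps):
SQL> -- clone instance started with clonedb=true; controlfile already
SQL> -- re-created against the backup image copies
SQL> begin
  2    dbms_dnfs.clonedb_renamefile('/backup/prod/system01.dbf',
  3                                 '/u01/clone/system01.dbf');
  4  end;
  5  /
SQL> alter database open resetlogs;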

Similar Messages

  • How do I verify Direct NFS is being used by RMAN

I have a two node 11.2.0.2 RAC database running on AIX 6.1 TL6. We are using NetApp storage for NFS. Direct NFS is configured using the /etc/oranfstab entries and the ODM libraries are loaded on each instance: "Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0". Here are the contents of /etc/filesystems:
    /export:
    dev = /vol/volnshr001_oraexport/oraexport
    vfs = nfs
    nodename = filer1
    mount = true
    options = rw,bg,hard,rsize=32768,wsize=32768,timeo=600,vers=3,proto=tcp,noac
    account = false
    /archivelogs:
    dev = /vol/volnshr001_oraclearchive
    vfs = nfs
    nodename = filer1
    mount = true
    options = rw,bg,hard,rsize=32768,wsize=32768,timeo=600,vers=3,proto=tcp,noac
    account = false
    /archivelogs2:
    dev = /vol/volnshr002_oraclearchive
    vfs = nfs
    nodename = filer2
    mount = true
    options = rw,bg,hard,rsize=32768,wsize=32768,timeo=600,vers=3,proto=tcp,noac
    account = false
    The oranfstab has the following entries:
    server: filer1
    path: 192.168.2.22
    export: /vol/volnshr001_oraclearchive mount:/archivelogs
    server: filer2
    path: 192.168.2.23
    export: /vol/volnshr002_oraclearchive mount:/archivelogs2
    server: filer1
    path: 192.168.2.22
    export: /vol/volnshr001_oraexport/oraexport mount:/export
    I tested if dNFS is being used by creating a tablespace on the NFS volumes and then querying v$dnfs_servers.
    SQL> create tablespace dnfs_ts datafile '/archivelogs/tiboratst/tibuat/dnfs_ts_01.dbf' size 100M;
    Tablespace created.
    SQL> !
    oracle@tibora30t[tibuat1]-/u01/oracle/11.2.0/dbhome_1/lib >ls -ltr /archivelogs/tiboratst/tibuat
    total 24811120
    -rw-r----- 1 oracle asmadmin 2896360960 Aug 18 17:59 1fmkanau_1_2
    -rw-r----- 1 oracle asmadmin 2896360960 Aug 18 17:59 1fmkanau_1_1
    -rw-r----- 1 oracle asmadmin 3377919488 Aug 18 17:59 1emkanau_1_2
    -rw-r----- 1 oracle asmadmin 3377919488 Aug 18 17:59 1emkanau_1_1
    -rw-r----- 1 oracle asmadmin 104865792 Aug 18 18:01 dnfs_ts_01.dbf
    oracle@tibora30t[tibuat1]-/u01/oracle/11.2.0/dbhome_1/lib >exit
SQL> select * from v$dnfs_servers;
ID  SVRNAME  DIRNAME                        MNTPORT  NFSPORT  WTMAX  RTMAX
--  -------  -----------------------------  -------  -------  -----  -----
1   filer1   /vol/volnshr001_oraclearchive  4046     2049     65536  65536
    The alert logs also displayed the following messages:
    Direct NFS: channel id [0] path [192.168.2.22] to filer [filer1] via local [] is UP
Direct NFS: channel id [1] path [192.168.2.22] to filer [filer1] via local [] is UP
However, when I try to do an RMAN backup I'm not seeing any messages which indicate whether or not dNFS is being used. I tried to do a backup using the second mount (/archivelogs2) but the corresponding server (filer2) was not visible in v$dnfs_servers.
    Is there any special configuration necessary for RMAN backups to use dNFS?
    Thanks,
    Leighton
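(One way to check this from the database side, sketched under the assumption that the 11.2 dNFS v$ views are populated while the backup channels are doing I/O: query them from a second session while RMAN runs.)
SQL> -- while the RMAN backup is running, from another session:
SQL> select * from v$dnfs_channels;
SQL> select * from v$dnfs_files;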


  • Can we directly access database by using BI Accelerator ?

    Hi Experts,
    I have several question about BIA NetWeaver2004:
        1. Can we directly access the database by using BIA?
        2. What are the advantages and disadvantages of BIA? Why would some people rather use BIA than Aggregates?
        3. Could anyone show me an example or demonstration of BIA?
    I am a newbie in BI, so please give me information about BIA.
    Thanks a bunch.

    Hi,
    Welcome to SDN. Remember the rule --- before posting, you should search the forum first.
1. Can we directly access the database by using BIA?
    ---> No.
2. What are the advantages and disadvantages of BIA? Why would some people rather use BIA than Aggregates?
    --> search SDN
3. Could anyone show me an example or demonstration of BIA?
    --> search SDN
    Thanks...
    Shambhu

  • Filter on the reports created using DIRECT DATABASE ACCESS in obiee 10g

How do I filter on a report that is created using DIRECT DATABASE ACCESS in OBIEE 10g?
I have report A with a link to report B; report B is created using DIRECT DATABASE ACCESS, so it is just a table, but I need to filter on 2 columns.
    here is the query in REPORT B:
    select strm, acad_group, crse_id, class_section, count(emplid) from v_crse_enrl
    group by crse_id, strm, class_section, acad_group;
    the link in REPORT A:
    '<a target="_blank" href="'||VALUEOF("bi_link")||'Go&Path=CF_CROSS_TEACHING_LIST&ACTION=Navigate&col1=STRM&val1='||view1_DIMEN.STRM||'&col2=CLASS_SECTION&val2='||TRIM(view1_DIMEN.CLASS_SECTION)||'">' ||view1_dimen.class_id||'</a>'
This link navigates from report A to report B.
    thank you!!

I haven't tested it using the URL, but the same approach works using presentation variables.
To your direct SQL, add a WHERE clause for those two columns, like WHERE col='@{col1}' AND col='@{col2}',
and try to run it from the URL.
Let me know how it goes.
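Putting the two together, a sketch of report B's direct SQL with the suggested presentation variables (assuming they keep the col1/col2 names used in the link from report A):
select strm, acad_group, crse_id, class_section, count(emplid)
from v_crse_enrl
where strm = '@{col1}' and class_section = '@{col2}'
group by crse_id, strm, class_section, acad_group;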

  • Anyone using Database 11g with UCM?

    Anyone using Database 11g with UCM? Can't find anything on the customer reference site. Customer wants to be reassured that they are not the first, etc...
    Thx.
    ft

    I know of a few companies that are using 11g ;)
They won't be the first.

  • Discussion Forum Portlets - Anyone using in Production?

    This is a followup to another post.
    Is anyone using the sample Discussion Forum and Discussion Forum Admin portlets provided in WebLogic Portal 8.1 in a production environment? I've already received feedback from one person who had difficulties and decided not to use them, but I wanted to know if there were any success stories.


  • AUXILIARY database update using full backup from target database

    Hi,
I am now facing the problem of how to keep an AUXILIARY database consistent with the target database over a certain period (a week). I do a full backup of our target database every day using RMAN. I know it is possible to use expdp for this, but I want to use the current full backup instead. Does anybody have ideas or experience with that? Thanks in advance!
    Regards,
    lik

That's OK. If you don't use RMAN to clone your database, you can simply create the clone from a cold backup of the primary database.
    Important things are
1) You must catalog all datafiles as level 0 image copies in the cloned database:
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@clonedb (in host 2)
    RMAN> catalog datafilecopy
    '/oracle/oradata/CLONE/datafile/abc.dbf',
    '/oracle/oradata/CLONE/datafile/def.dbf',
    '/oracle/oradata/CLONE/datafile/ghi.dbf'
    level 0 tag 'CLONE';
2) You need to take incrementals of the primary database to refresh the clone database. Make sure you specify a tag for the incremental, and that the tag name is exactly the same as the one used in step (1):
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@prod (in host 3)
    RMAN> backup incremental level 1 tag 'CLONE' for recover of copy with tag 'CLONE' database format '/backup/%u';
3) Copy the newly created incrementals (in host 3) to the clone database site (host 2). Make sure the directory is exactly the same.
    $ rcp /backup/<incr_backup> /backup/
    -- rcp <the loc of a incremental in host 3> <the loc of a incremental in host 2>
4) Apply the incrementals to update the clone database. Make sure you provide the tag you specified:
    RMAN> connect catalog rman/rman@rcvcat
    RMAN> connect target sys/manager@clone
    RMAN> recover copy of database with tag 'CLONE';
5) After updating the clone database, delete the incremental backups and uncatalog the image copies:
    RMAN> delete backup tag 'CLONE';
    RMAN> change copy like '/oracle/oradata/CLONE/datafile/%' uncatalog;
*** As you can see, you can clone a database using any method. The key is that you have to catalog the clone's datafiles when you refresh it, and uncatalog them once you have finished.
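A quick sanity check between the steps above, sketched: confirm that the image copies and the incrementals actually carry the shared tag before recovering.
RMAN> list datafilecopy tag 'CLONE';
RMAN> list backup of database tag 'CLONE';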

  • Has anyone used JAAS with WebLogic?

    Has anyone used JAAS with Weblogic? I was looking at their example, and I have a bunch of questions about it. Here goes:
    Basically the problem is this: the plug-in LoginModule model of JAAS used in WebLogic (with EJB Servers) seems to allow clients to falsely authenticate.
    Let me give you a little background on what brought me to this. You can find the WebLogic JAAS example (to which I refer below) in the pdf: http://e-docs.bea.com/wls/docs61/pdf/security.pdf . (I believe you want pages 64-74) WebLogic, I believe goes about this all wrong. They allow the client to use their own LoginModules, as well as CallBackHandlers. This is dangerous, as it allows them to get a reference (in the module) to the LoginContext's Subject and authenticate themselves (i.e. associate a Principal with the subject). As we know from JAAS, the way AccessController checks permissions is by looking at the Principal in the Subject and seeing if that Principal is granted the permission in the "policy" file (or by checking with the Policy class). What it does NOT do, is see if that Subject
    has the right to hold that Principal. Rather, it assumes the Subject is authenticated.
    So a user who is allowed to use their own Module (as WebLogic's example shows) could do something like:
//THEIR LOGIN MODULE (SOME CODE CUT-OUT FOR BREVITY)
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;
public class BasicModule implements LoginModule {
    private NameCallback strName;
    private PasswordCallback strPass;
    private Subject subj;
    //INITIALIZE THIS MODULE
    public void initialize(Subject subject, CallbackHandler callbackHandler, Map sharedState, Map options) {
        try {
            //SET SUBJECT - NOTE: THIS GIVES YOU A REFERENCE TO THE LOGIN
            //CONTEXT'S SUBJECT AND ALLOWS YOU TO PASS IT BACK TO THE LOGIN CONTEXT
            subj = subject;
            //SET CALLBACKS
            strName = new NameCallback("Your Name: ");
            strPass = new PasswordCallback("Password:", false);
            Callback[] cb = { strName, strPass };
            //HANDLE THE CALLBACKS
            callbackHandler.handle(cb);
        } catch (Exception e) { System.out.println(e); }
    }
    //LOG THE USER IN
    public boolean login() throws LoginException {
        //TEST TO SEE IF SUBJECT HOLDS ANYTHING YET
        System.out.println("PRIOR TO AUTHENTICATION, SUBJECT HOLDS: "
                + subj.getPrincipals().size() + " Principals");
        //SUBJECT "AUTHENTICATED" - BECAUSE SUBJECT NOW HOLDS THE PRINCIPAL
        MyPrincipal m = new MyPrincipal("Admin");
        subj.getPrincipals().add(m);
        return true;
    }
    public boolean commit() throws LoginException {
        return true;
    }
    //REMAINING LoginModule METHODS WERE CUT OUT OF THE ORIGINAL; MINIMAL STUBS ADDED
    public boolean abort() throws LoginException {
        return true;
    }
    public boolean logout() throws LoginException {
        return true;
    }
}
(Sorry for all that code)
I tested the above code, and it fully associates the Subject (and its principal) with the LoginContext. So my question is, where in the process (and code) can we put the LoginContext and Modules so that a client cannot do this? With the above example, there is no security (a call to myLoginContext.getSubject().doAs(...) will work).
    I think the key here is to understand JAAS's plug-in security model to mean:
    (Below are my words)
    The point of JAAS is to allow an application to use different ways of authenticating without changing the application's code, but NOT to allow the user to authenticate however they want.
    In WebLogic's example, they unfortunately seem to have used the latter understanding, i.e. "allow the user to authenticate however they want."
That, as I think I've shown, is not security. So how do we solve this? We need to put JAAS on the server side (with no direct JAAS client-side), and that includes the LoginModules as well as the LoginContext. For an EJB server this means that the same internal permission-checking code can be used regardless of whether a client connects through RMI/RMI-IIOP/JEREMIE (etc.). It does NOT mean that the client gets to choose how they authenticate (except by choosing among YOUR set ways).
    Before we even deal with a serialized subject, we need to see how JAAS can
    even be used on the back-end of an RMI (RMI-IIOP/JEREMIE) application.
I think what needs to be done is this: the client needs to have the stubs for our LoginModule, LoginContext, CallbackHandler, and Callbacks. Then they can put their info into those, and everything is handled server-side. So they may not even need to send a Subject across anyway (though they may want to as well).
Please let me know if anyone sees this problem too, or if I am just completely off track with this one. I think figuring out how to do JAAS as though everything were local, and then putting RMI (or whatever) on top, is the first thing to tackle.

    Send this to:
    newsgroups.bea.com / security-group.

  • The type or namespace name 'SQLite' could not be found (are you missing a using directive or an assembly reference?)

Trying to get SQL Server Compact up and running. I connected to a SQLite database and auto-generated the Model.cs code.
    however, I get the following error.
Error 1: The type or namespace name 'SQLite' could not be found (are you missing a using directive or an assembly reference?) C:\Users\kenny\Documents\Visual Studio 2013\Projects\WindowsFormsApplication2\WindowsFormsApplication1\Model.cs 3
Any help is greatly appreciated. Please provide some detail as I am a novice.
    Thanks

OK, so I got past that problem. I had to add the System.Data.SQLite Core package in Manage NuGet Packages.
    Then I changed the auto-generated code "using SQLite" to "using System.Data.SQLite".
    But now I have a new problem, it says "System.Data.SQLite.SQLiteConnection' does not contain a definition for 'CreateTable' "
    Here is the code
//This code was generated by a tool.
//Changes to this file will be lost if the code is regenerated.
//using SQLite;
using System;
using System.Data.SQLite;
using System.Data.SQLite.EF6;
namespace WindowsFormsApplication2
{
    public class SQLiteDb
    {
        string _path;
        public SQLiteDb(string path)
        {
            _path = path;
        }
        public void Create()
        {
            using (SQLiteConnection db = new SQLiteConnection(_path))
            {
                // CreateTable<T> belongs to the sqlite-net API (the commented-out
                // "using SQLite" above); System.Data.SQLite.SQLiteConnection has no
                // such method, which is why this line does not compile.
                db.CreateTable<servers>();
            }
        }
    }
    public partial class servers
    {
        public Int64 server_name { get; set; }
        public Double cpu_usage { get; set; }
    }
}
    Any ideas why CreateTable is not available?

  • Problem in opening cloned database

I have cloned a database named test to test1 in the same Oracle home. My problem is that I cannot open the cloned database:
    c:\>set oracle_sid=test1
    c:\>sqlplus /nolog
    SQL>conn sys/oracle as sysdba;
    ORA-12560 TNS:Protocol adapter Error.
But if I use the following procedure I am able to open the cloned database:
c:\>set oracle_sid=test (Note: test is the source database, not the cloned database)
    c:\>Sqlplus /nolog
    SQL>conn sys/oracle as sysdba
SQL>startup nomount pfile='c:\oracle\pfile\inittest1.ora' (this is the location of the init.ora file for the cloned database)
    SQL>alter database mount;
    SQL>alter database open;
Now the cloned database is up and running.
Now if I try to open the original database (TEST), I get the error:
    c:\>set oracle_sid=play
c:\>sqlplus /nolog
    SQL>conn sys/oracle as sysdba
    SQL>startup
    ERROR:database is already up and running
    SQL>select name from v$database;
    name
    Play1
Can anyone help me to open the two databases at the same time?
Thanks...

1) oradim
    Create an instance by specifying the following options:
    -NEW -SID sid | -SRVC srvc | -ASMSID sid | -ASMSRVC srvc [-SYSPWD pass]
    [-STARTMODE auto|manual] [-SRVCSTART system|demand] [-PFILE file | -SPFILE]
    [-SHUTMODE normal|immediate|abort] [-TIMEOUT secs] [-RUNAS osusr/ospass]
Example:
    oradim -NEW -SID orcl2
    2) Password creation
    orapwd file=orapwd<SID> password=oracle entries=5
    Virag
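Putting the answer together for this case, a sketch (the SID and pfile path are taken from the original post):
c:\>oradim -NEW -SID test1 -STARTMODE manual -PFILE c:\oracle\pfile\inittest1.ora
c:\>set oracle_sid=test1
c:\>sqlplus /nolog
SQL>conn sys/oracle as sysdba
SQL>startup pfile='c:\oracle\pfile\inittest1.ora'
With a Windows service created for test1, each instance can be started under its own ORACLE_SID and both databases can be open at the same time.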

  • What kind of throughput should I expect? Anyone using AQ in high volume?

    Hi,
I am working with AQ in a 10.2 environment and have been doing some testing. What I have is a very simple queue with 1 queue table. The queue table structure is:
    id number
    message varchar(256)
    message_date date
I have not done anything special with storage parameters, etc., so it's all default at this point. Then I created a stored procedure that will generate messages given message text and the number of times to loop. When I run this procedure with 10,000 iterations it runs in 15 seconds (if I commit all messages at the end) and 24 seconds if I commit after each message (probably more realistic).
Now, on the same database I have a straight table that contains one column (message varchar(256)). I have also created a similar stored procedure to insert into it. For this, 10,000 inserts take about 1 second.
As you can see, there is an order of magnitude of difference, so I am looking to see if others have been able to achieve higher throughput than 500-700 messages per second and, if so, what was done to achieve it.
    Thanks in advance,
    Bill
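For reference, a minimal sketch of the kind of enqueue loop described above (the queue name test_q and payload type msg_t are hypothetical; the real test procedure presumably differs in detail):
declare
  enq_opts  dbms_aq.enqueue_options_t;
  msg_props dbms_aq.message_properties_t;
  msg_id    raw(16);
begin
  for i in 1 .. 10000 loop
    dbms_aq.enqueue(
      queue_name         => 'test_q',
      enqueue_options    => enq_opts,
      message_properties => msg_props,
      payload            => msg_t(i, 'test message', sysdate),
      msgid              => msg_id);
    commit;  -- per-message commit: the 24-second case from the post
  end loop;
end;
/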

Yes, I have seen it. My testing so far hasn't even gotten to the point of concurrent enqueue/dequeue. So far I have focused on enqueue time, and it is dramatically slower than a plain old database table. That link also discussed multiple index-organized tables being created behind the scenes. I'm guessing that the 15x factor I am seeing is because of the 4 underlying tables, plus they are index-organized, which adds additional overhead.
    So my question remains - Is anyone using AQ for high volume processing? I suppose I could create a bunch of queues. However, that will create additional management on my side which is what I was trying to avoid by using AQ in the first place.
Can one queue be served by multiple queue tables? Can queue tables be partitioned? I would like to minimize the number of queues so that the dequeue processes don't have to contain multiplexed logic.
    Thanks

  • Problem in getting data into database with standard direct input program

    HI All,
I have a problem where records are not updated in MM01 or MM02 with the standard direct input program. I have data in an internal table, and from that table I am trying to upload into the database by using the background job MRP_MATERIAL_MASTER_DATA_LOAD.
When I execute my program it shows the message that the job has started. Then I go into SM37 and check the job status; there too I see the job completed successfully.
But if I go to MM03 to check whether the materials were updated or created, I can't find the material numbers from the internal table.
So if anyone can help me it will be great.
    Thanks in Advance
    Venkat N

    Hi Anil,
Thanks for your answer, but the problem I am facing is that I have the material number, denominator, actual UOM, and nominator field values in the flat file,
and I am using the RMDATIND direct input program with MRP_MATERIAL_UPLOAD as the job name for the background job while uploading data into the database.
Here I am not getting data into the database, but when I execute the job in SM37 it shows the message that job processing completed successfully. That is my status.
If you can help me with this it will be great.
    Thanks,
    Venkat N

  • Updating tables in Cloned Database

I have a clone of my production database on another server, let's say server 2. I would like to, on a weekly basis, copy updates made to all tables in the production database on server 1 to this cloned database on server 2.
What is the best way to go about accomplishing this? Right now I am manually copying each table over the old table using Data Pump.
    Any suggestion will be greatly appreciated.

    If it is all of the tables in a set of schemas, then you should be able to do this:
expdp user/password schemas=a,b,c,... content=data_only ... directory=my_dir dumpfile=my_data_only.dmp
    If you want the data overwritten then you could use:
    impdp user/password directory=my_dir dumpfile=my_data_only.dmp table_exists_action=truncate
If you have referential constraints, you will have to disable them before the import and re-enable them afterwards (see the sketch below).
    If you want to do this in one command and you have a network link created on the target database pointing to the source database then while on the target database issue this command:
    impdp user/password schemas=a,b,c,... directory=my_dir network_link=source_db_link content=data_only table_exists_action=truncate
    This will eliminate the dumpfile, so you don't have to copy it over, and you also can do it in one command.
    Hope this helps.
    Dean
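For the constraint step mentioned above, a hedged sketch (owner, table, and constraint names are hypothetical):
SQL> alter table a.child_table disable constraint child_fk;
SQL> -- run the impdp command, then:
SQL> alter table a.child_table enable constraint child_fk;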

  • Cloned database

    Hi
What are all the scenarios in which we have to give
alter database open resetlogs
Is it necessary to give this after performing incomplete recovery, or after changing the database name using the backed-up trace controlfile? When do we have to give this? Please tell me.
    one more thing
If I clone a database named "gem" from the source database "gem", then I can apply the archive logs of the source database to the target database and roll forward the changes in the cloned database, as long as the database name is gem. But if I create the target database with the name "gem1" from the source database "gem", is it still possible to roll forward the changes in the cloned database by applying the archive logs generated by the source database?
I am confused. Please clear my doubts.
    Regards
    'Aram

What are all the scenarios in which we have to give 'alter database open resetlogs'?
When all else fails, Read The Fine Manual:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_1004.htm#i2079942
* You must specify RESETLOGS:
  o After performing incomplete media recovery or media recovery using a backup controlfile
  o After a previous OPEN RESETLOGS operation that did not complete
  o After a FLASHBACK DATABASE operation
* If a created controlfile is mounted, then you must specify RESETLOGS if the online logs are lost, or you must specify NORESETLOGS if they are not lost.
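For the first bullet, a minimal sketch of an incomplete recovery that must end with RESETLOGS (the target time is a placeholder):
RMAN> startup mount;
RMAN> run {
        set until time "to_date('2011-10-20 12:00:00','yyyy-mm-dd hh24:mi:ss')";
        restore database;
        recover database;
      }
RMAN> alter database open resetlogs;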

  • Has anyone used "Microsoft.BizTalk.B2B.PartnerManagement" API ?

Hello, forum; I am trying to make sense of this undocumented API library to create an agreement using BizTalk 2010. So far, I can create a Partner, Business Profiles, Business Identities, and Partnerships, but how to create an agreement (OnewayAgreements and all that) still eludes me. I have checked one reference, written by Mick Breeze (http://blogs.breeze.net/mickb/2012/05/22/BizTalk2010CreatingTPMPartnersThroughCode.aspx),
and I am also getting inspired by the newest one published here (http://msdn.microsoft.com/en-us/library/windowsazure/dn232396.aspx), which is not the same but at least it's something.
I have navigated through the classes but I can't manage to do all the tasks it is supposed to achieve.
    Has anyone used this before that may be able to help ? Is there any SDK sample of some sort ?
    Thanks in advance.

Partner automation. The user wants to use an ASP.NET form instead of the BizTalk admin console in order to create and update a large party database, with a total count of somewhere between 300 and 500 partners. So, we already have a webform with all the business-related
information, and we've added some port-related information that will ultimately end up as the party information. With that info, this library should be able to accomplish everything we normally do from the console.
We are not migrating from a previous version, so the party migration tool is a no-go. We explored the idea of creating a binding-like file that would be loaded from BtsTask, but ultimately we ended up with the idea of using this obscure library to perform
all those so-called duties of handling party-related information on BizTalk Server.
    Thanks in advance.
