Database partitions

Hi,
Please can anyone tell me the significance of data partitioning in terms of space issues?
We have a table in our database which will store up to 100 GB of data. This table is partitioned by range. The data in this table is never deleted; it keeps accumulating after every daily/monthly run.
So my doubt is: how is the data maintained in this table in such huge amounts through database partitioning without posing any housekeeping problems? Also, if I want to move a certain amount of data from this table onto separate storage, say a disk, how does database partitioning help me in doing that?
Thanks.
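(For readers with the same question: with range partitioning, housekeeping happens per partition rather than per table. A minimal, hypothetical Oracle sketch - all table, partition and tablespace names here are invented:)

```sql
-- Hypothetical range-partitioned table: each month lands in its own partition.
CREATE TABLE sales_history (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2011_01 VALUES LESS THAN (TO_DATE('2011-02-01','YYYY-MM-DD')),
  PARTITION p_2011_02 VALUES LESS THAN (TO_DATE('2011-03-01','YYYY-MM-DD'))
);

-- Housekeeping is then a per-partition operation:
-- move an old partition to a tablespace on separate (cheaper) storage...
ALTER TABLE sales_history MOVE PARTITION p_2011_01 TABLESPACE archive_ts;

-- ...or swap it out into a standalone table that can be exported and dropped.
CREATE TABLE sales_2011_01 AS SELECT * FROM sales_history WHERE 1 = 0;
ALTER TABLE sales_history EXCHANGE PARTITION p_2011_01 WITH TABLE sales_2011_01;
```

(EXCHANGE PARTITION is a data-dictionary operation, so old data can be taken out of the big table without rewriting it.)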

Billy, I am really an amateur in this forum. I did not post it in multiple forums intentionally.
First I posted both database partition and Business Objects queries in Database-General forum.
When I did not get any responses there for some time, I decided to post Database partition in pl/sql forum and Business Objects in Objects forum.
And if you notice, I got responses to my queries only after I posted them again in two different forums. This means I was in the wrong forum the first time. Since I am new to this forum, I was really not aware of the etiquette to be followed here, and I apologise for that.
But you should not assume everyone is playing a prank. I genuinely work at a top international software consultancy and I urgently needed the information on my query, which is why I posted it again in another forum.
I will keep in mind in future not to post queries in multiple forums. But it was really disappointing to see the culture of addressing people in this forum. Till now this was the only decent forum I came across where people are really serious. It was really disappointing to see a mistake addressed as a prank.

Similar Messages

  • NW 7.3 specific - Database partitioning on top of logical partitioning

    Hello folks,
    In NW 7.3, I would like to know if it is possible to add a specific database partition rule on top of a logically partitioned cube. For example, if I have an LP cube by fiscal year, I would also like to specifically partition all generated cubes at the DB level. I could not find any option in the GUI. In addition, each generated cube is view-only (it cannot be changed in the GUI). Would anybody know if this is possible?
    Thank you
    Ioan

    Fair point! Let me explain in more detail what I am looking for. In 7.0x, a cube can be partitioned at the DB level by fiscal period. Let's suppose my cube has only fiscal year 2012 data. If I partition the cube at the DB level by fiscal period in 12 buckets, I will get 12 distinct partitions (E table only) in the database. If the user runs a query on 06/2012, the DB will search for the data only in the 06/2012 bucket; this is obviously faster than browsing the entire cube (even with indexes).
    In 7.3, cubes can be logically partitioned (LP). I created an LP by fiscal year; so far so good. Now, I would like to partition at the DB level each individual cube created by the LP. Right now I cannot. This means that my fiscal year 2012 cube will have its entire data residing in only 1 large partition, so a 06/2012 query will take longer (in theory).
    So my question is: "Is it possible to partition a cube generated by an LP into fiscal period buckets?" I believe the answer is no right now (Dec 2011).
    By the way, all the above is true in an RDBMS environment. This is not a concern for BWA / HANA, since there the data is column based and stored in RAM (not the same technology as an RDBMS).
    I hope this clarifies my question
    Thank you
    Ioan

  • Should I use Database Partitioning now ?

    Hi,
    I have an Employee table in an HR system, consisting of data for 10 branches, about 3.5 million rows in total, around 8-10 GB in size.
    Each branch is only allowed to access the data of its own branch.
    BUT some departments can access all data.
    Given this information, should I implement database partitioning on this table?
    If yes, what are the reasons?
    If no, also what are the reasons?
    Thank you for your help,
    xtanto

    Sometimes partitioning does not necessarily increase performance as much as we think.
    Are you going to have DML regularly on these tables?
    Read the following
    http://asktom.oracle.com/pls/ask/f?p=4950:8:2270402190261165858::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:14188311731917

  • Database partitioning

    Hi All,
    is database partitioning for centralized enrollment, distributed, or both?
    Does it reduce the data transferred from secondary storage to primary storage?
    Many thanks

    Ayham wrote:
    is database partitioning for centralized enrollment, distributed, or both? Does it reduce the data transferred from secondary storage to primary storage?
    You need to clarify your question and put some context to it. Oracle has a feature called the Partitioning Option, and it deals with partitioning of tables. Oracle does not have a feature called database partitioning... which sounds more like a Hadoop kind of feature.
    Nor does any of this have relevance to the SQL and PL/SQL languages within the basic context in which you asked your question. Database issues should be raised in {forum:id=61}.

  • Oracle Database Partitioning Feature on Azure VM

    Hi,
    we would like to know if the Oracle Database partitioning feature is available and can be activated on an Azure virtual machine with a pre-installed Oracle database instance.
    Thanks

    Hi,
    Oracle Database 11g R2 EE with Advanced Options includes Partitioning.
    To create a virtual machine, check this link:
    http://azure.microsoft.com/en-in/marketplace/partners/msopentech/oracle-db-11g-db-11g-ee-all/
    Regards,
    Azam khan

  • Database Partitioning Problem

    hi
    I am working on a transparent partition for one of our databases.
    During the partition setup, in the 1st step, I unchecked the check box, so that my outline modifications would flow from both sides, not from the source outline alone.
    But once the outline is modified, say a member is added to/deleted from the outline in the partitioned area, be it in the source or the target database, and I say "synchronise outline",
    a message pops up saying both outlines are in synchronisation. But if I validate, it throws an error because the outlines are not really synchronised; either the modified source or target has extra members in the shared/partition area.
    But if in the 1st step I check the checkbox, so that outline modifications flow in the same direction as the data flow, then outline synchronisation works. However, only the source database is taken as the reference, and once I add a member to the target partitioned database outline, it gets deleted after synchronisation.
    Please advise me on this.
    Thanks in advance

    Thanks. One question: if I export a table, will the export also carry its partitions? Is it possible to drop the said table, recreate it with new partition parameters, and then import the backup I made into the newly created table with its new partition setup?
    Thanks,
    Hades
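    (The approach Hades describes is workable. A minimal, hypothetical sketch - table and partition names invented, and CTAS used in place of export/import for brevity:)

    ```sql
    -- Preserve the data, then rebuild the table with new partition bounds.
    CREATE TABLE orders_backup AS SELECT * FROM orders;

    DROP TABLE orders;

    CREATE TABLE orders (
      order_id   NUMBER,
      order_date DATE
    )
    PARTITION BY RANGE (order_date) (
      PARTITION p_old  VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
      PARTITION p_new  VALUES LESS THAN (MAXVALUE)
    );

    -- Rows are routed into the new partitions automatically on insert;
    -- an imp into the recreated table behaves the same way.
    INSERT INTO orders SELECT * FROM orders_backup;
    ```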

  • Automated Database Partitioning

    Hello,
    I am looking for some insight on how to create a Perl script or Java program which would allow for automated partitioning of an Oracle 10g database. Presently, partitioning is being done manually and this process is taking quite some time to complete due to the size of the database. Therefore, it was suggested that perhaps creating a Perl script or Java program would serve to simplify this process. However, I'm fairly novice with writing Perl scripts and Java for the purposes of Oracle database administration, so any examples or guidance in this respect would be appreciated.
    In addition, if there is a more efficient way to perform the task of automated partitioning via internal features of the 10g database, I'd like to pursue that route if possible.
    Thanks in advance,
    Scott.

    After further analysis, I've decided that I would like to code a procedure, etc. in PL/SQL which can be scheduled via a UNIX crontab. With this in mind, I've included a "spec" of what I'd like to accomplish. I'm looking for some insight regarding the automation of table partitioning. Has anyone done this before, and if so, am I on the right path? Any examples or advice would be appreciated.
    Here's my "spec":
    Several large tables need to be partitioned on a daily basis. The partitions are stored in their own tablespaces.
    1. Create a table that will contain info about the partitioned tables.
    Table Name: PARTITION_STAT
    Columns: table_name, tablespace_name, partition_frequency (e.g. daily, weekly, etc), partition_name, max_partition_date
    Note: This table will need to be updated whenever a new table using partitioning is created.
    2. Create a table that will contain the rowid of the last tablespace created.
    Table Name: TABLESPACE_STAT
    Column(s): last_tablespace_rowid
    PL/SQL package requirements:
    Need to create a procedure that will run automatically. The logic requirements are as follows:
    1. Check to see if a new tablespace has been created. This is done by checking TABLESPACE_STAT.last_tablespace_rowid. All rowids greater than last_tablespace_rowid are new tablespaces. Update last_tablespace_rowid in the TABLESPACE_STAT table.
    2. Create partitions for the new tablespaces. For every new tablespace: create a new partition and update the PARTITION_STAT table.
    3. Check to see if new tablespaces are required, and send an email notification to the DBA if tablespaces are needed. This is accomplished by checking every max_partition_date and partition_frequency in the PARTITION_STAT table. Send an email to the DBA with the list of tables that will require new tablespaces.
    I realize that this is a very general overview of what I am looking to accomplish. Any assistance would be greatly appreciated.
    Thanks and sorry for the long post.
    Scott.
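    (A hedged sketch of step 2 of Scott's spec. All names here are assumptions matching his described PARTITION_STAT layout, and the partition-bound logic is illustrative only. Note also that Oracle 11g interval partitioning can automate this entirely.)

    ```sql
    -- Adds the next daily partition for each tracked table, driven by the
    -- PARTITION_STAT metadata table described in the spec above.
    CREATE OR REPLACE PROCEDURE add_daily_partitions IS
    BEGIN
      FOR t IN (SELECT table_name, tablespace_name, max_partition_date
                  FROM partition_stat
                 WHERE partition_frequency = 'DAILY'
                   AND max_partition_date < TRUNC(SYSDATE)) LOOP
        -- Partition for the day after the last recorded partition.
        EXECUTE IMMEDIATE
          'ALTER TABLE ' || t.table_name ||
          ' ADD PARTITION p_' || TO_CHAR(t.max_partition_date + 1, 'YYYYMMDD') ||
          ' VALUES LESS THAN (TO_DATE(''' ||
          TO_CHAR(t.max_partition_date + 2, 'YYYY-MM-DD') ||
          ''',''YYYY-MM-DD''))' ||
          ' TABLESPACE ' || t.tablespace_name;

        UPDATE partition_stat
           SET max_partition_date = max_partition_date + 1
         WHERE table_name = t.table_name;
      END LOOP;
      COMMIT;
    END;
    /
    ```

    (Scheduling can then be done inside the database with DBMS_SCHEDULER rather than crontab, which avoids the external shell dependency.)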

  • Data Partitioning in Database

    Hi All,
    I am a little bit confused about data partitioning in databases. I have read about it and I understand that there are two types, called vertical partitioning and horizontal partitioning...
    I have three questions:
    1- Is data partitioning used over networks, or can it be on one PC?
    2- Data partitioning divides the data in a table into partitions (groups) - according to what? The quantity or the meaning of the data inside the table?
    3- Is the clustering that Oracle can execute using CTX_CLS.CLUSTERING related to partitioning or not?
    Please can anyone give a short answer about this.
    regards
    Dheya

    973907 wrote:
    I am a little bit confused about data partitioning... there are two types, called vertical partitioning and horizontal partitioning...
    These are basic partitioning concepts. Vertical partitioning means taking something like a single 100-column table and splitting it up into 10 partitions/tables with 10 columns each (plus the PK, of course).
    Horizontal partitioning means taking a table like a year's invoices and splitting that into 365 daily partitions, where each partition contains the invoice rows for a specific day of the year.
    There are also other partitioning concepts, like application partitioning, network partitioning (which happens to be a bad thing) and so on.
    1- Is data partitioning used over networks, or can it be on one PC?
    Neither. Database partitioning happens inside the database. The database uses different structures for storing data: hash tables, index organised tables, B+trees, bitmap indexes and so on. Partitioning is just another type of data structure, used by the database, for storing and managing data.
    2- Data partitioning divides the data in a table into partitions (groups) - according to what?
    That depends on the type of partitioning used. Oracle supports list, hash and range partitions. Each of these uses a different algorithm to determine what data goes into which partition, and each satisfies different partitioning requirements.
    3- Is the clustering that Oracle can execute using CTX_CLS.CLUSTERING related to partitioning or not?
    Partitioning is a physical data structure and transparent to the user and application. The user and app see table EMPLOYEES. They do not need to know or understand the physical structure of the table. The table can be a hash table, an index organised table, a hash partitioned table - and the user and app will not even know that. It does not change the way the user and app use the table, and it does not make a difference to their SQL statements on the table.
    CTX_CLS is an interface for classifying text data/documents. It uses its own data structures to do that. It calls a collection of related text data, a cluster.
    This has nothing to do with the Oracle Partitioning Option feature.
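    (To make the answer to question 2 concrete, hedged one-liners for the three Oracle partitioning methods mentioned above; table and partition names are invented:)

    ```sql
    -- Range: rows routed by ordered key ranges (e.g. dates).
    CREATE TABLE t_range (d DATE)
      PARTITION BY RANGE (d)
      (PARTITION p1 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD')),
       PARTITION p2 VALUES LESS THAN (MAXVALUE));

    -- List: rows routed by discrete key values (the "meaning" of the data).
    CREATE TABLE t_list (region VARCHAR2(10))
      PARTITION BY LIST (region)
      (PARTITION p_emea VALUES ('UK','DE','FR'),
       PARTITION p_apac VALUES ('JP','AU'));

    -- Hash: rows spread evenly by a hash of the key (pure "quantity",
    -- no meaning attached to which partition a row lands in).
    CREATE TABLE t_hash (id NUMBER)
      PARTITION BY HASH (id) PARTITIONS 4;
    ```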

  • Using partition in real time

    Hi, I am new to Essbase.
    How can we use the replicated and transparent partition options in real time?
    Kindly help me understand this concept.

    Hi,
    We use a transparent partition as an interface to read data from the source cube.
    A transparent partition allows users to manipulate data stored in a target database as if it were part of the actual source database. The remote data is retrieved from the source database each time users of the target database request it. Write-backs to the target database also flow through to the source database.
    We can use this technique for improved maintenance of large databases, by splitting them into smaller databases so that operations run on the smaller databases while users still see a single database.
    We can also use it for improved security maintenance.
    A replicated database partition copies a portion of the source database to be stored in a target database. Users can access the target database as if it were the source. The database administrator must occasionally refresh the target database from the source database.
    We can use this technique as a means of transferring data between Essbase cubes.
    I will give you a classic example.
    In the current reporting cube we maintain only two years of data: the previous year and the current year.
    As part of year-end activity, we need to move the previous year's data to the History cube and delete the same data from the current cube. So:
    we create a replicated partition for the previous year between the current and History cubes,
    replicate the data from the current cube to the History cube,
    then drop the replicated partition and clear the previous year's data in the current cube.
    I hope this will suffice.

  • Any Maxl Script to Export and import of Partition

    Hi All,
    Is there any MaxL or ESSCMD command which will do an export or an import of a partition?
    Regards,
    Krishna.

    Hi Krishna,
    there are 2 things here.
    Firstly, creation of the partition (which is possible through MaxL scripts). Once the partition is made, an XML file is created (which is what garycris was mentioning in his post).
    Go to any application -> database -> partitions -> right click; you will see "import partition", and the file is of XML nature. When you look at this window, you will understand. I am not sure of the same functionality in MaxL.
    Hope this info helps
    Sandeep Reddy Enti
    HCC

  • Physical Partitions in SAP BW

    Hi Experts,
    Our current implementation of SAP BW is on Db2 UDB  (the system is technically upgraded to 7.0 but the objects are still based on BW 3.5)
    Database DB6
    Operating System AIX
    We are trying to find out if physical partitioning is possible for InfoCubes, DSOs and DataSources.
    Is it done by the Basis and DB teams, or does DB2 take care of partitioning by itself?
    My apologies if this has already been posted; I am part of the BW team, so I'm not sure about this question.
    Please suggest. Any docs/notes are appreciated.
    Thanks
    El

    Hi El,
    we support two kinds of partitioning in SAP NetWeaver BW on DB2:
    1. Hash partitioning with the DB2 Database Partitioning Feature: tables are distributed over several database partitions based on a hash key. Performance of complex queries is improved because the queries are executed in parallel on the database partitions. Maintenance operations like index build, statistics collection and data deletion are also improved by executing in parallel.
    2. Multi-dimensional clustering (MDC): The table data is clustered along one or more columns. This speeds up queries which restrict on the clustering columns as well as large delete operations (for example, deletion of InfoPackages from PSA tables).
    You find details about these options and how to implement them in our administration guide on the SAP Service Marketplace (http://service.sap.com/instguides: Folder SAP NetWeaver - SAP NetWeaver 7.0 (2004s) - Operations: Database Specific Guides: "SAP NW 7.0 or higher Admin Guide: DB2 for LUW").
    Best regards,
    Brigitte Bläser
    SAP on DB2 for Linux, UNIX, and Windows Development
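    (DDL sketches of the two options Brigitte describes, for orientation; the table and column names are invented, not from the SAP guide:)

    ```sql
    -- 1. Database Partitioning Feature: rows distributed across database
    --    partitions by a hash of the key; queries run in parallel on each.
    CREATE TABLE fact_sales (
      sale_id   INTEGER NOT NULL,
      fiscper   INTEGER,
      amount    DECIMAL(15,2)
    ) DISTRIBUTE BY HASH (sale_id);

    -- 2. Multi-dimensional clustering: rows physically clustered by column
    --    values, speeding up queries that restrict on those columns as well
    --    as large deletes (e.g. deleting one month of PSA data at a time).
    CREATE TABLE psa_data (
      request_id INTEGER,
      calmonth   INTEGER
    ) ORGANIZE BY DIMENSIONS (calmonth);
    ```

    (In a BW system these tables are generated by the system, so in practice the settings come from the administration guide rather than hand-written DDL.)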

  • Database creation problems and partition consolidation

    This is my first time on here so please look past my green-ness. =)
    I installed Oracle 8.1.7 successfully and opted to set up the database creation later so I could put the database on another partition. Well, I tried once to create the database on the partition I installed Oracle onto (/oracle), but I had insufficient space; it said I needed 1.3 GB. My "/database" partition is only a 1 GB partition (and /oracle is 2 GB). My plan was, of course, to separate them.
    OK, I didn't see an option to create the DB on another partition, so I resigned myself to using /oracle. My question now is: is the 960 MB on /oracle enough to create the database? I used the custom setup so I could see how everything was broken down, and I noticed that my rollback segment is 500 MB. Is this necessary? Can I get by with 100 MB and be fine? Also, if I did do this, I'd have approximately 100 MB free for data; this is enough, I hope??
    Lastly, I need to combine the partitions somehow without affecting /. The hard drive is broken down into:
    gigs
    5.5 /
    2.0 /oracle
    1.0 /database
    Does anyone have any ideas? In Windows, I used Partition Magic, but I don't know if there is a Linux version.
    One more thought (last one for real =): it had 50 MB allocated for its shared memory pool. I'm assuming this will take away from the 128 MB of RAM I have, effectively leaving only 78 MB available to X. Will this cause any noticeable problems other than a bit of slowdown? I have most if not all of my virtual memory free at all times. I know I should get more RAM, but you know how the budget thing goes...
    Any help would be greatly appreciated,
    Rob

    Try running the catalog and catproc sql scripts.
    $ cd $ORACLE_HOME/rdbms/admin
    $ svrmgrl
    connect internal
    @catalog
    @catproc
    That will set up all of the internal views and packages that
    oracle needs to do DDL.
    Scott
    Don DeLuca (guest) wrote:
    : I've gotten through the install and created the database. The
    : database comes up, and I can do some simple stuff like create a
    : new user or create a simple table, but when I try to run SQL DDL
    : statements I get errors saying: "recursive SQL error". I
    : looked at the sql.log file after the install. It doesn't look
    : like all the packages were created successfully. Have others had
    : similar problems with the system table/package creation? To me
    : it looks like a DDL that was needed was not called during the
    : install and later caused problems during package creation.
    : This is my fourth time installing and I have the same problem
    : each time. I also tried to run the plsql.sql file to recreate
    : some of the packages but this did not work either.
    : I've just about given up and I'm starting to realize why it's
    : free.

  • How to stop BDB from Mapping Database Files?

    We have a problem where the physical memory on Windows (NT kernel 6 and up, i.e. Windows 7, 2008R2, etc.) gets maxed out after some time when running our application. On an 8 GB machine, if you look at our process loading BDB, it's only around 1 GB. But when looking at the memory using RAMMap, you can see that the BDB database files (not the shared region files) are being mapped into memory, and that is where most of the memory consumption is taking place. I wouldn't care normally, as memory mapping can have performance and usability benefits, but the result is that the system comes to a screeching halt. This happens when we are inserting records at a high rate, e.g. tens of millions of records in a short time frame.
    I would attach a picture to this post, but for some reason the insert image is greyed out.
    Environment open flags: DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_MPOOL | DB_THREAD | DB_LOCKDOWN | DB_RECOVER
    Database open flags: DB_CREATE | DB_AUTO_COMMIT

    An update for the community
    Cause
    We opened a support request (SR) to work with Oracle on the matter. The conclusion we came to was that the main reason for the memory consumption was the Windows System Cache.  (For reference, see this http://support.microsoft.com/kb/976618) When opening files in buffered mode, the equivalent of calling CreateFile without specifying FILE_FLAG_NO_BUFFERING, all I/O to a file goes through the Windows System Cache.  The larger the database file, the more memory is used to back it.  This is not the same as memory mapped files, of which Berkeley will use for the region files (i.e. the environment.) Those also use memory, but because they are bounded in size, will not cause an issue (e.g. need a bigger environment, just add more memory.)  The obvious reason to use the cache is for performance optimizations, particularly in read-heavy workloads. 
    The drawback, however, is that when there is a significant amount of I/O in a short amount of time, that cache can get really full, which can result in the physical memory being close to 100% used. This has negative effects on the entire system.
    Time is important, because Windows needs time to transition active pages to standby pages, which decreases the amount of used physical memory. What we found is that when our DB was installed on flash disk, we could generate a lot more I/O and our tests could run in a fraction of the time, but the memory would get close to 100%. If we ran those same tests on slower disk, while the result was the same, i.e. 10 million records inserted into the database, it takes a lot longer and the memory utilization does not get even close to 100%. Note that we also see the memory consumption happen when we use the hot backup in the BDB library. The reason for this is obvious: in a short amount of time we are reading the entire BDB database file, which makes Windows use the system cache for it. The total amount of memory might be a factor as well: on a system with 16 GB of memory, even with flash disk, we had a hard time reproducing the issue where the memory climbs.
    There is no Windows API that allows an application to control how much system cache is reserved or usable or maximum for an individual file.  Therefore, BDB does not have fine grained control of this behavior on an individual file basis.  BDB can only turn on or off buffering in total for a given file.
    Workaround
    In Berkeley DB, you can turn off buffered I/O in Windows by specifying the DB_DIRECT_DB flag to the environment. This is the equivalent of calling CreateFile specifying FILE_FLAG_NO_BUFFERING. All I/O goes straight to disk instead of memory, and all I/O must be aligned to a multiple of the underlying disk sector size. (NTFS sector size is generally 512 or 4096 bytes, and normal BDB page sizes are generally multiples of that, so for most this shouldn't be a concern; but know that Berkeley DB will test the page size to ensure it is compatible, and if it is not, it will silently disable DB_DIRECT_DB.) What we found in our testing is that using the DB_DIRECT_DB flag had too much of a negative effect on performance with anything but flash disk, and therefore we cannot use it. We may consider it acceptable for flash environments where we generate significant I/O in short time periods. We could not reproduce the memory effect when the database was hosted on a SAN disk running 15K SAS, which is more typical, and therefore we are closing the SR.
    However, Windows does have an API that controls the total system wide amount of system cache space to use and we may experiment with this setting. Please see this http://support.microsoft.com/kb/976618 We are also going to experiment with using multiple database partitions so that Berkeley spreads the load to those other files possibly giving the system cache time to move active pages to standby.

  • ACI Setup - How to Configure Data Warehouse Database - Partitioning

    After reading the ACI Install Guide & Data Warehouse documentation, I have some questions regarding how to setup the database:
    - Should database partitioning be setup? If so, what tables should be partitioned and what should they be partitioned by?
    - Are there any other best practices or tips for setting up & tuning the database?
    We are trying to avoid the (painful) situation of having to add partitioning later on; it is much easier to add it up front (if done correctly up front).
    Thanks in advance for any advice!

    On the tables recommended for partitioning, the partition key is nullable. If ATG inserts a null value into the timestamp column of one of the partitioned tables, we'll receive an ORA-14300 or ORA-14440 error. Oracle isn't able to figure out what partition to map that record to.
    Can the columns be changed to NOT NULL? Or, can the application guarantee a nullable value won't be inserted?
    Here are some example columns:
    ARF_SITE_VISIT.START_VISIT_TIMESTAMP --> TIMESTAMP(6) null
    ARF_REGISTRATION.REGISTRATION_TIMESTAMP --> TIMESTAMP(6) null
    ARF_LINE_ITEM.SUBMIT_TIMESTAMP --> TIMESTAMP(6) null
    ARF_PROMOTION_USAGE.USAGE_TIMESTAMP --> TIMESTAMP(6) null
    ARF_RETURN_ITEM.SUBMIT_TIMESTAMP --> TIMESTAMP(6) null
    Thanks
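    (Two hedged options for the nullable-key problem above, sketched against one of the listed tables; whether either is acceptable depends on the ATG application's guarantees:)

    ```sql
    -- Option 1: forbid NULLs outright, if the application can guarantee
    -- a value is always supplied for the partition key.
    ALTER TABLE arf_site_visit MODIFY (start_visit_timestamp NOT NULL);

    -- Option 2: keep the column nullable but give NULL rows somewhere to
    -- land. In Oracle range partitioning, NULL sorts above every non-NULL
    -- value, so a MAXVALUE partition absorbs rows whose key is NULL
    -- (this only works if the table does not already have a MAXVALUE
    -- partition).
    ALTER TABLE arf_site_visit
      ADD PARTITION p_overflow VALUES LESS THAN (MAXVALUE);
    ```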

  • SQL1013N  The database alias name or database name "FCS" could not be found

    Hi All,
    Solution Manager is running on a Win2003 server with a DB2 database. When I try to connect to DB2 from the command prompt I get an error. I am following these steps:
    1. Log in with the db2 user.
    2. At the command prompt, when I try to check the cfg file, I get an error:
    db2 get db cfg for fcs
    SQL1013N  The database alias name or database name "FCS" could not be found.
    SQLSTATE=42705
    I am pasting the snapshot of the database manager:
    C:\>db2 get snapshot for dbm
                Database Manager Snapshot
    Node name                                      =
    Node type                                      = Enterprise Server Edition with
    local and remote clients
    Instance name                                  = DB2
    Number of database partitions in DB2 instance  = 1
    Database manager status                        = Active
    Product name                                   = DB2 v9.1.600.703
    Service level                                  = s081007 (WR21415)
    Private Sort heap allocated                    = 0
    Private Sort heap high water mark        = 0
    Post threshold sorts                                = Not Collected
    Piped sorts requested                          = 0
    Piped sorts accepted                           = 0
    Start Database Manager timestamp      = 07/29/2010 11:35:25.534766
    Last reset timestamp                           =
    Snapshot timestamp                             = 08/03/2010 18:43:14.492590
    Remote connections to db manager               = 0
    Remote connections executing in db manager     = 0
    Local connections                              = 0
    Local connections executing in db manager      = 0
    Active local databases                         = 0
    High water mark for agents registered          = 2
    High water mark for agents waiting for a token = 0
    Agents registered                              = 2
    Agents waiting for a token                     = 0
    Idle agents                                    = 0
    Committed private Memory (Bytes)               = 11141120
    Switch list for db partition number 0
    Buffer Pool Activity Information  (BUFFERPOOL) = OFF
    Lock Information                        (LOCK) = OFF
    Sorting Information                     (SORT) = OFF
    SQL Statement Information          (STATEMENT) = OFF
    Table Activity Information             (TABLE) = OFF
    Take Timestamp Information         (TIMESTAMP) = ON  07/29/2010 11:35:25.534766
    Unit of Work Information                 (UOW) = OFF
    Agents assigned from pool                      = 2542
    Agents created from empty pool                 = 4
    Agents stolen from another application         = 0
    High water mark for coordinating agents        = 2
    Max agents overflow                            = 0
    Hash joins after heap threshold exceeded       = 0
    Total number of gateway connections            = 0
    Current number of gateway connections          = 0
    Gateway connections waiting for host reply     = 0
    Gateway connections waiting for client request = 0
    Gateway connection pool agents stolen          = 0
    Memory usage for database manager:
        Memory Pool Type                           = Database Monitor Heap
           Current size (bytes)                    = 65536
           High water mark (bytes)                 = 65536
           Configured size (bytes)                 = 327680
        Memory Pool Type                           = Other Memory
           Current size (bytes)                    = 10092544
           High water mark (bytes)                 = 10092544
           Configured size (bytes)                 = 4292870144
    Kindly suggest where I am going wrong.
    Regards

    Dear Sir,
           Please check the DB2DART log
    Command line output:
    C:\Users\Administrator.MAXXMOBILEDLH>db2dart db2smn /CHST /WHAT DBBP OFF
                        DB2DART Processing completed with error!
                            Complete DB2DART report found in:
    C:\PROGRAMDATA\IBM\DB2\DB2COPY1\DB2\DART0000\DB2SMN.RPT
    db2smn.rpt:
    Error: Failed sqledosd API on open system database directory.
    SQL1057W  The system database directory is empty.
    Error: This phase encountered an error and did not complete.
                        DB2DART Processing completed with error!
                                      WARNING:                       
                        The inspection phase did not complete!        
                            Complete DB2DART report found in:
    C:\PROGRAMDATA\IBM\DB2\DB2COPY1\DB2\DART0000\DB2SMN.RPT
        _______    D A R T    P R O C E S S I N G    C O M P L E T E    _______"
    Please help me to sort out this issue!!
    Our Solution Manager is down.
    Regards!!!
