Reorganizing database

Hi All,
I was going through various discussions related to DB reorg:
Re: Database Reorg.
reorganization of DB
I believe the concept of reorganizing a database is roughly like defragmenting free space on a Windows system.
If it has not been done for a long time, will it affect performance, since the blocks get scattered all over the datafiles (due to DMLs)? Is there any method by which we can identify whether a DB is a candidate for reorganization?
I have not faced this situation myself; I am asking just to be aware of it.
~Thanks
The Oracle version is 10.2.0.2

OK, thanks for the clarification.
But still... if, as a result of a reorg, free extents come together, new inserts and other operations can make use of that free space instead of extending the datafile (not in all cases). Please clarify...
If you are using Locally Managed Tablespaces (LMTs), all free space is always usable. The idea of coalescing free space to get a usable extent is a holdover from Dictionary Managed Tablespaces.
> I think SAP recommends doing the reorg with the help of a tool known as SAPDBA, which internally uses exp/imp.
I don't have any personal experience with SAP, but other commercial vendors I've worked with (as well as most in-house developers) are notorious for their lack of understanding of how the database works, and so depend heavily on long-obsolete rules and concepts.
~Thanks
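
The question above about how to spot a reorg candidate is not answered directly in the thread, so here is a minimal hedged sketch of one way to check it on 10g, assuming the segment lives in an ASSM (locally managed, automatic segment space management) tablespace. The schema and table names are placeholders, not anything from this thread. DBMS_SPACE.SPACE_USAGE reports how many blocks below the high-water mark are full versus mostly empty; a large share of mostly-empty blocks is the classic sign that full scans would benefit from a reorg, whereas scattered extents on their own are not.

-- Hedged sketch: how densely is a table packed? (ASSM tablespaces, 10g)
SET SERVEROUTPUT ON
DECLARE
  -- block counts by free-space bucket: FS1 = 0-25% free ... FS4 = 75-100% free
  l_unf_blocks  NUMBER; l_unf_bytes  NUMBER;
  l_fs1_blocks  NUMBER; l_fs1_bytes  NUMBER;
  l_fs2_blocks  NUMBER; l_fs2_bytes  NUMBER;
  l_fs3_blocks  NUMBER; l_fs3_bytes  NUMBER;
  l_fs4_blocks  NUMBER; l_fs4_bytes  NUMBER;
  l_full_blocks NUMBER; l_full_bytes NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner      => 'SCOTT',        -- placeholder owner
    segment_name       => 'EMP',          -- placeholder table
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf_blocks, unformatted_bytes => l_unf_bytes,
    fs1_blocks  => l_fs1_blocks,  fs1_bytes  => l_fs1_bytes,
    fs2_blocks  => l_fs2_blocks,  fs2_bytes  => l_fs2_bytes,
    fs3_blocks  => l_fs3_blocks,  fs3_bytes  => l_fs3_bytes,
    fs4_blocks  => l_fs4_blocks,  fs4_bytes  => l_fs4_bytes,
    full_blocks => l_full_blocks, full_bytes => l_full_bytes);
  DBMS_OUTPUT.PUT_LINE('Full blocks          : ' || l_full_blocks);
  DBMS_OUTPUT.PUT_LINE('Blocks 75-100% free  : ' || l_fs4_blocks);
END;
/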

Similar Messages

  • Are there performance benefits to reorganizing a database using export/import?

    I have a production database on Oracle 9.2.0.5 which has been running for the last 3 years, since it was upgraded from 8.1.7. At that time we did a full export of the 8.1.7 database, created the 9.2 instance, and then imported all the application schemas.
    Load on our database has been increasing and there is constant pressure from management to improve performance. We have looked at indexes many times, have plenty of memory for the SGA, and have tuned various init.ora parameters. Being a third-party package, the queries cannot be rewritten.
    The application is a mix of OLTP and reporting; it is definitely more read than write.
    Are there any benefits to reorganizing the database using export/import? That is, we would do a full export of the existing database, delete all objects from the application schemas, and then do schema imports. We would run dbms_stats again to recompute statistics. Of course, we will test all of that in a test environment before making changes in production.
    I have heard different views on reorganization. Some people say it is useless; some say it can improve performance since the data will be packed more densely into fewer blocks.
    Appreciate your feedback.

    Hi,
    Oracle gave us reorg utilities (dbms_redefinition) because Oracle does not do real-time reorganization, for performance reasons.
    In some applications reorgs are critical to high performance, while in others they make no difference.
    Remember, a reorg simply puts the indexes and tables back into their "optimal" pristine state.
    The most striking benefit of table reorgs is when a "sparse" table experiences lots of full scans. After the reorg, response time can be cut in half.
    Also, in cases where related rows are queried together, a reorg with row resequencing (like 10g sorted hash clusters) makes a big difference:
    http://www.dba-oracle.com/t_table_row_resequencing.htm
    But like I said, it depends on many factors . . .
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
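
    Not part of the reply above, but since dbms_redefinition is mentioned, a minimal hedged sketch of an online reorg with it follows. All object names (APPUSER, BIG_TABLE, BIG_TABLE_INTERIM) are placeholders. Note that COPY_TABLE_DEPENDENTS only exists from 10g onward; on the 9.2.0.5 release in this thread the indexes, constraints and grants would have to be created on the interim table by hand before the finish step.

    BEGIN
      -- check the table qualifies (raises an error if it does not)
      DBMS_REDEFINITION.CAN_REDEF_TABLE('APPUSER', 'BIG_TABLE',
                                        DBMS_REDEFINITION.CONS_USE_PK);
    END;
    /
    -- empty interim table with the desired "pristine" layout, e.g. in another tablespace
    CREATE TABLE appuser.big_table_interim
      TABLESPACE users
      AS SELECT * FROM appuser.big_table WHERE 1 = 0;

    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE('APPUSER', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
      -- 10g+: clone indexes, triggers, constraints and grants onto the interim table
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APPUSER', 'BIG_TABLE', 'BIG_TABLE_INTERIM',
                                              num_errors => l_errors);
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APPUSER', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('APPUSER', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
    END;
    /
    DROP TABLE appuser.big_table_interim;   -- after the swap this holds the old, fragmented copy

    The table stays available for DML throughout; only the brief FINISH_REDEF_TABLE step takes a lock, which is the main argument for this over export/import on a system that cannot take downtime.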

  • "There was an error opening the database for the library "~/Pictures/Apert"

    Hey everyone,
    There's already another discussion thread about this at the link below, but the problem was never actually resolved. http://discussions.apple.com/thread.jspa?threadID=2343014&tstart=0&messageID=11667261#11667261
    So I stumbled across this problem where Aperture 3 will display the following error when opening the image library.
    There was an error opening the database for the library “~[path]/Aperture Library.aplibrary”.
    One user suggested navigating to the actual library, and ⌘⌥-clicking the library to open it with the Aperture Library First Aid dialog. This however, did not work for me. I still received the error messages upon ⌘⌥-clicking the library.
    Here's a little hack I discovered for those of you still struggling. It doesn't fix the problem (leaves quite a large dent remaining actually), but hopefully someone can cultivate what little I discovered.
    ⌥-click Aperture (not the library)
    You should get to the library selection dialog box. Create a new library (bottom right corner) and remember the location of the new library. Put a few random pictures in the new one.
    Close Aperture
    ⌃-click both the old and new libraries and "Show Package Contents."
    Open the folder "Database" in both libraries.
    Cross check the folders to see if the old library is missing files that are in the new library. If your old library is missing some files (mine was), drag the missing files from the new library to the old one.
    Close both folders.
    ⌘⌥-click the old library.
    NOW, I got the Aperture Library First Aid dialog.
    Rebuild the library (3rd option)
    Wait a while.
    Aperture should open up with a folder named "Recovered Folder." Within this folder will be a SINGLE project named "Recovered Project" with ALL the images from your old library. There may be a few albums there too, but mine were all empty so it didn't matter in my case. To my knowledge, I lost all the adjustments I performed too.
    This is as far as I got. I have much of my library backed up, so I only needed to copy the most recent photos into the backup. I am NOT reorganizing and re-editing pictures in this one!
    As Aperture is photo-organizing and photo-editing software, I understand that this by no means fixes the problem, since you lose all organization and editing. But it enables one to at least view the images that were lost. Hopefully a more knowledgeable hacker can figure out how to fix this completely. Unfortunately, I simply don't have the time to continue messing with this.
    Good luck to anyone else having issues! I understand the frustration you must be facing; it's freaky having years' worth of images simply disappear! After this incident, I'm definitely going to configure Time Machine to perform more frequent backups, since my latest backup was already a couple of weeks old.

    Hey All,
    To my horror I have a similar problem, and your solution didn't seem to help. To be specific, I get the error:
    "There was an error opening the database for the library “~/Pictures/Aperture Library_2.aplibrary”" every time I try to run Aperture. This happened because my Apple computer shut off completely (due to a bad battery) while importing photos from a memory card.
    I've tried "option + command" while clicking the Aperture application, and that gave me the same error message without any option other than to quit the program. I tried "option + command" and clicked the library package, but that didn't work either. I tried creating a new Aperture library, opening it (which I did successfully), and then switching to my main one, but that didn't work.
    Does anyone know a way to get it to work again and still have the edits there?
    I'm a professional photographer and have thousands of photos in the library, so this is distressing. The master files for the photos are located on an external hard drive, but I'm not sure how to access them in Aperture. Are all the edits to the photographs on the external hard drive or in the Aperture library? Is there any other way to resolve this issue without losing the edits?
    Does Apple have a support number I can call? Any help would be greatly appreciated!!

  • Clearing out logs on filesystem and database

    We've got SOA Suite up and running well enough, but I was wondering how to control the various log levels and more importantly how to delete old logs for the various components.
    I assume that some logs are stored in the file system and some in the underlying SOA support database. In our test support database, the orabpel and users tablespaces have grown to 4.5 GB and 2 GB respectively, so clearly something is being logged in the database. Also, I can see entire SOAP messages in one of the SOA console apps (can't remember which one), and I need to turn that logging down because our SOAP messages contain proprietary data that needs to be encrypted at rest.
    I've looked at some of the documentation and googled too. I must be dense or something because I'm not finding much. I did find a procedure to delete some logs on the file system, but it required a shutdown of one of the services. That can't be right.
    What's the proper procedure for getting rid of old data in the database, file system logs and tuning down the content logged in the various SOA components?
    Anyone have a pointer to the docs or a how-to?
    Thanks

    OK, nothing like a little crisis to get the mind working.
    First, our policy set had logging turned on at the "envelope" level. The Oracle consultants who helped set this up didn't stress the logging pieces of the policy. Anyway, I just disabled the logging steps in the policy definition and committed.
    Next I purged the logs using the OWSM console: Operational Management > Overall Statistics > Message Logs > Purge Message Logs.
    Finally, I went into Grid Control and reorganized the 4.5 gigabyte log_objects table, which for some reason still had 10 rows after repeated purges.
    My log_objects table is now 0.13 MB and NOT growing, because I've turned off logging.
    In my defense, this is our first SOA implementation and we didn't get a lot of operational "knowledge transfer" from our consultants. Regardless, I'm a big dufus for not figuring this out before.
    Hope this helps someone else in the future.
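
    For anyone who cannot (or would rather not) use Grid Control for that last step, a hedged 10g alternative is a segment shrink, assuming the table sits in an ASSM tablespace. The owsm_owner schema below is a placeholder; the schema that actually owns log_objects depends on the install.

    ALTER TABLE owsm_owner.log_objects ENABLE ROW MOVEMENT;
    ALTER TABLE owsm_owner.log_objects SHRINK SPACE CASCADE;  -- CASCADE shrinks the dependent indexes too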

  • Performance problem in 9.2.0 database

    Hello All,
    Database: 9.2.0.7
    OS: Windows 2003 Server Standard Edition
    RAM: 4 GB
    The buffer cache hit ratio on this server is around 83%, where it was normally around 98% before I did some maintenance activities.
    I did some maintenance activities on this database in January.
    The maintenance activities included the steps below:
    1. In production I deleted old data from the production tables.
    2. Reorganized tablespaces and tables.
    3. Rebuilt indexes for those tables.
    4. Finally, collected statistics for those tables.
    After this activity the buffer cache hit ratio is very low.
    Can anyone please advise on how to increase the hit ratio?
    TIA,

    ORCLDB wrote:
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    CPU time                                                        7,029    55.87
    PX Deq: Signal ACK                                 77,056       1,950    15.50
    db file sequential read                         6,119,051       1,302    10.35
    db file scattered read                          3,544,645       1,054     8.38
    log file sync                                     127,427         551     4.38
    How long was the snapshot interval?
    How many CPUs are in the machine?
    How many users do you have to support?
    Is your operating system 32-bit or 64-bit?
    Is this an OLTP system or a data warehouse / DSS type of system?
    First thoughts:
    You probably have some changes in execution plans because you made some objects smaller and more densely packed - leading Oracle to think that tablescans and index fast full scans would be efficient. (This is a GUESS based on the large number of "db file scattered read" waits and the assumption that this is NOT a data warehouse.)
    You probably have a large amount of spare RAM (your machine has 4GB, your cache is 300MB) and should increase the size of the cache. (This is a guess based on the fact that your db file reads are averaging less than 0.3 milliseconds, so they are almost certain to be coming out of a local filesystem cache.) The change may decrease the amount of CPU you are using, because all that copying between caches uses CPU.
    The parallel execution probably needs to be stopped (based on your comments to another user about rebuilding indexes in parallel) because this can result in lots of tablescans and index fast full scans - which aren't necessarily going to use direct path reads, because Oracle may be doing tablescans that go "serial to parallel". Set the suspect indexes back to noparallel.
    For confirmation about where the time goes, check the "SQL Ordered by Reads" section of the report and look at the execution paths of the top two or three SQL statements. (Note: if you set statspack to take snapshots at level 6, you can usually get the actual execution plans of the top SQL by running the script $ORACLE_HOME/rdbms/admin/sprepsql.sql.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    A general reminder about "Forum Etiquette / Reward Points": http://forums.oracle.com/forums/ann.jspa?annID=718
    If you never mark your questions as answered, people will eventually decide that it's not worth trying to answer you, because they will never know whether or not their answer has been of any use, or whether you even bothered to read it.
    It is also important to mark answers that you found helpful. It lets other people know that you appreciate their help, and it acts as a pointer for other people researching the same question. Moreover, if you mark a bad or wrong answer as helpful, someone may be prompted to tell you (and the rest of the forum) what is so bad or wrong about it.
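
    Not part of the reply above, but as a hedged illustration of two of its suggestions (putting the rebuilt indexes back to serial and giving the buffer cache some of the spare RAM); the index name and cache size are placeholders:

    -- find indexes still carrying a parallel degree after the rebuilds
    SELECT owner, index_name, degree
      FROM dba_indexes
     WHERE TRIM(degree) NOT IN ('0', '1');

    ALTER INDEX appuser.big_table_ix NOPARALLEL;          -- placeholder index name

    -- larger buffer cache (9.2 dynamic SGA; SCOPE = BOTH needs an spfile)
    ALTER SYSTEM SET db_cache_size = 1200M SCOPE = BOTH;  -- placeholder size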

  • Reorganization of Tables in EBS Database

    Has anyone reorganized the tables in an EBS database to reclaim fragmented space within a tablespace? If so, please suggest how to identify which tables are the most fragmented, so that they can be reorganized within the tablespace.
    Regards,
    Subramaniam PL

    DUPLICATE
    do NOT cross/multi-post!
    Reorganization of Tables in Database

  • Reorganization of Tables in Database

    Has anyone reorganized the tables in a database to reclaim fragmented space within a tablespace? If so, please suggest how to identify which tables are the most fragmented, so that they can be reorganized within the tablespace.
    Regards,
    Subramaniam PL

    It is locally managed only. But if I run the following query, as per note ID 1019709.6, it shows fragmentation in a few tablespaces:
    select
        total.tablespace_name                           tsname,
        count(free.bytes)                               nfrags,
        nvl(max(free.bytes)/1024, 0)                    mxfrag,
        total.bytes/1024                                totsiz,
        nvl(sum(free.bytes)/1024, 0)                    avasiz,
        (1 - nvl(sum(free.bytes), 0)/total.bytes)*100   pctusd
    from
        dba_data_files total,
        dba_free_space free
    where
        total.tablespace_name = free.tablespace_name(+)
        and total.file_id = free.file_id(+)
    group by
        total.tablespace_name,
        total.bytes;
    Tablespace Name : APPS_TS_TX_DATA
    Fragmentation : 2GB
    I wanted to reclaim this fragmented space.
    Regards,
    Subramaniam PL
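
    The query above only looks at free space per tablespace. As a hedged complement, the sketch below compares each table's allocated space with the space its optimizer statistics say the rows actually need, which is closer to "which tables are worth reorganizing". The 100 MB and 50% thresholds are arbitrary examples, and the result is only meaningful when statistics are reasonably current.

    SELECT t.owner,
           t.table_name,
           ROUND(s.bytes / 1024 / 1024)                     alloc_mb,
           ROUND(t.num_rows * t.avg_row_len / 1024 / 1024)  approx_data_mb
      FROM dba_tables   t,
           dba_segments s
     WHERE s.owner        = t.owner
       AND s.segment_name = t.table_name
       AND s.segment_type = 'TABLE'
       AND s.bytes        > 100 * 1024 * 1024          -- only tables above 100 MB
       AND t.num_rows * t.avg_row_len < s.bytes * 0.5  -- less than half the space holds row data
     ORDER BY s.bytes DESC;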

  • SharePoint 2010 Database maintenance for beginners / developers

    Hey,
    Here is a thing I concern myself with from time to time.
    As a full-time developer for SharePoint, I'm not quite familiar with maintaining its databases. So after I set up my dev environment I usually don't care about maintaining the system. The logical consequence is that the system begins to slow down after a while.
    I don't have the time (and - to be honest - not the muse) to dive deep into the maintenance plans recommended by Microsoft, but I want to keep the performance of my system at an acceptable level. So I read the best practices provided by Microsoft about index defragmentation, reorganizing, shrinking and so on, but for me it's quite difficult to fully dive into these things without spending too much of my business time.
    So I wonder if there are any execution-ready T-SQL scripts or something like that which I can run from time to time to keep my databases from getting too fragmented. I found some scripts for a stored procedure which reorganizes a particular index. But what I am thinking about is a script which iterates through all databases, all tables and all indexes and reorganizes them.
    I'm not looking for the perfect maintenance plan to achieve the best possible performance - I just want to keep my dev environment from slowing down too much.
    Are there any scripts like that out there? Or do you have other tips for me?
    Thank you!

    Hi,
    The steps to configure a maintenance plan are very easy and won't take more than 10 minutes. It is a one-time activity: nothing but dragging and dropping the required tasks from the toolbar. You can schedule the maintenance plan to run at a specific time, or run it directly on demand.
    http://social.technet.microsoft.com/wiki/contents/articles/13956.sharepoint-2010-how-to-create-a-sql-server-2008-r2-maintence-plan-for-sharepoint.aspx
    If you are looking only for SQL scripts then you may need to post in a SQL forum, because maintenance plans will be almost the same for SharePoint databases or any other SQL Server databases.
    Please remember to mark your question as answered and vote helpful if this solves/helps your problem.
    Best Regards,
    Pavan Kumar Sapara
    s p kumar
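
    Since the original question asked for an execution-ready script rather than a maintenance plan, here is a minimal hedged sketch in T-SQL (SQL Server 2005 or later). It only handles the current database, the 10% / 100-page thresholds are arbitrary examples, and it simply reorganizes everything above the threshold instead of choosing between REORGANIZE and REBUILD.

    DECLARE @schema_name sysname, @table_name sysname, @index_name sysname, @sql nvarchar(max);

    DECLARE frag_cursor CURSOR FAST_FORWARD FOR
        SELECT s.name, o.name, i.name
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
        JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        JOIN sys.objects o ON o.object_id = ips.object_id
        JOIN sys.schemas s ON s.schema_id = o.schema_id
        WHERE ips.avg_fragmentation_in_percent > 10   -- example threshold
          AND ips.page_count > 100                    -- skip tiny indexes
          AND i.name IS NOT NULL;                     -- skip heaps

    OPEN frag_cursor;
    FETCH NEXT FROM frag_cursor INTO @schema_name, @table_name, @index_name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'ALTER INDEX ' + QUOTENAME(@index_name)
                 + N' ON ' + QUOTENAME(@schema_name) + N'.' + QUOTENAME(@table_name)
                 + N' REORGANIZE;';
        EXEC sp_executesql @sql;
        FETCH NEXT FROM frag_cursor INTO @schema_name, @table_name, @index_name;
    END
    CLOSE frag_cursor;
    DEALLOCATE frag_cursor;

    Run it per content database (for example from a SQL Agent job on the dev box); it is a convenience sketch for a dev environment, not a replacement for a proper maintenance plan.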

  • XI Database problem

    Hi All,
    I have a strange situation in XI Development. My server guys told me that the database is full. The tables are:
    SAPXD2   SYS_LOB0000046998C00020$$    LOBSEGMENT   PSAPXD2DB         83.062.784    10382.848     1.454      1-           0
    SAPDAT   SXMSCLUR                     TABLE        PSAPDAT           48.761.856    6.095.232       864            1-           0
    SAPDAT   SXMSCLUP                     TABLE        PSAPDAT           21.303.296    2.662.912       440            1-           0
    Yesterday I checked RWB ---> Adapter Engine and selected "all containing errors" for this year. I found more than 10,000 error messages, cancelled all of them at the Adapter Engine level, and then deleted all the cancelled messages manually using note 807615.
    But they still cannot find any free space at the database level.
    I also checked RWB ---> Integration Engine for this year with "all containing errors" and found more than 10,000 system error messages; I am not able to cancel these messages.
    How can I delete these error messages from the Integration Engine?
    My retention period is 20 days, and I have followed as many of the notes as I could.
    Please suggest how we can resolve this situation. Thank you very much.
    Regards,
    Sateesh

    Good Morning All,
    I checked the report RSXMB_SHOW_REORG_STATUS.
    The results are:
    No. of messages in DB: 910,616
    No. of messages in client: 910,616
    No. of messages in client still to be reorganized: 910,616
    Archiving status:
    Messages not in retention period (can be archived): 2,065
    Delete status:
    Asynchronous messages not in retention period (can be deleted): 908,012
    Asynchronous messages in retention period (cannot be deleted): 366
    Asynchronous messages without errors, still to be flagged: 786
    Synchronous messages in retention period (cannot be deleted): 171
    How should I interpret this situation? Kindly guide me.
    Thank you very much
    Sateesh

  • Where is my EXPORT dump file? How to do a BRTools offline table reorganization?

    Hello,
    Here was my BRTools Export output message :
    About to export specified tables via Direct Path ...
    Current user changed to SAPPRD
    . . exporting table                          CDHDR   60357362 rows exported
    Export terminated successfully without warnings.
    BR0280I BRSPACE time stamp: 2011-08-09 16.53.04
    BR1160I 1 table exported by EXP utility
    BR0280I BRSPACE time stamp: 2011-08-09 16.53.04
    BR0670I Enter 'c[ont]' to continue, 'b[ack]' to go back, 's[top]' to abort:
    But where is my export dump file?
    Besides, could you tell me the procedure for using BRTools export & import to do an offline table reorganization?
    Due to some reasons, I can't do an online table reorganization.

    Hi Ming,
    I already pointed to OSS note 646681 - Reorganizing tables with BRSPACE in my previous message. You don't need any additional effort, such as creating a script, if you use brspace as I noted in my last message.
    Stop the SAP system, but not the database, then execute the command below:
    1) brspace -u / -f tbreorg -t "CDHDR"
    2) In the menu that comes up, select (8 - Table reorganization mode (mode)) .. offline
    3) Then continue with "c"
    At the end of these steps, the table and related indexes will be reorganized.
    After you complete the steps above, you can update statistics with the command below:
    brconnect -u / -c -f stats -t all -f collect -p 4
    Best regards,
    Orkun Gedik

  • Performance benefits from database export import

    Hi gurus,
    I have SAP R/3 4.7 X110 on Windows / SQL Server 2005.
    I want to improve performance by doing a sort of database reorganization.
    I read Note 159316 - Reorganizing tables on SQL Server (SQL 2000/2005),
    but the method described there is not very quick to do.
    The question is: are there significant performance benefits from making an export of my instance and then re-importing it, according to the system copy procedure?
    Or is there another quick way to reorganize the database?
    Thank you very much for your answer.
    Antonio.

    You want to do an export/import or a reorganization to get more performance. The question is: does it make sense? The given Microsoft article and the mentioned notes describe pretty well in which cases it makes sense and in which it does not.
    I wouldn't start with the last step (reorganization); I would first try to find out where the real problems are: checking the I/O throughput using Performance Monitor (perfmon.exe) on Windows, checking long-running statements using ST05, and finding out whether you have real index fragmentation using the given DBCC procedures.
    If you then come to the conclusion that a reorganization makes sense, I would do it.
    And as stated in the SAP note, if the system becomes faster after such a task, the underlying I/O subsystem is not fast enough to serve all the requests.
    There's no general statement like "if you do a reorganization using method A or method B it will be faster", because the reasons for slow performance can vary.
    Markus
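
    As a hedged illustration of the fragmentation check Markus refers to: on SQL Server 2000 that is DBCC SHOWCONTIG, on 2005 the physical-stats DMV shown below it. The table name is a placeholder for one of the large SAP tables.

    DBCC SHOWCONTIG ('SOME_SAP_TABLE') WITH TABLERESULTS;

    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('SOME_SAP_TABLE'), NULL, NULL, 'LIMITED');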

  • Snap_no_new_entry ABAP dump occurring although ST22 is reorganized regularly

    Dear Friends,
    A snap_no_new_entry ABAP dump occurs and hampers all activities, even though ST22 is reorganized daily. After restarting the server it works fine for a day only. Can you suggest how to fix this problem permanently?
    System Environment: SAP ECC 6.0, AIX, DB2
    Tablespaces:
    Tablespace Name    | TS Type | KB Total | Percent Used | KB Free  | Page Size (KB) | High-Water Mark (KB) | No. Containers | Contents          | TS State | AUTORESIZE
    PSAPTEMP16         | SMS     | 64       | 100.00       | 0        | 16             | 0                    | 4              | Temporary objects | Normal   | NO
    SYSTOOLSTMPSPACE   | SMS     | 64       | 100.00       | 0        | 16             | 0                    | 4              | Temporary objects | Normal   | NO
    ECP#BTABD          | DMS     | 6422528  | 99.94        | 3680     | 16             | 6418720              | 4              | Large objects     | Normal   | YES
    ECP#ES702DX        | DMS     | 35586048 | 99.94        | 21888    | 16             | 35564032             | 4              | Large objects     | Normal   | YES
    ECP#EL702DX        | DMS     | 18972672 | 99.93        | 13632    | 16             | 18958912             | 4              | Large objects     | Normal   | YES
    SYSCATSPACE        | DMS     | 2785280  | 99.67        | 9216     | 16             | 2775936              | 4              | Regular data      | Normal   | YES
    ECP#ES702IX        | DMS     | 7471104  | 99.58        | 31456    | 16             | 7439520              | 4              | Large objects     | Normal   | YES
    ECP#STABD          | DMS     | 3637248  | 99.58        | 15424    | 16             | 3621696              | 4              | Large objects     | Normal   | YES
    ECP#POOLD          | DMS     | 6258688  | 99.54        | 28832    | 16             | 6252608              | 4              | Large objects     | Normal   | YES
    ECP#POOLI          | DMS     | 3637248  | 99.47        | 19200    | 16             | 3617920              | 4              | Large objects     | Normal   | YES
    SAPTOOLS           | DMS     | 3053056  | 99.14        | 25120    | 16             | 3027808              | 4              | Large objects     | Normal   | YES
    ECP#PROTD          | DMS     | 851968   | 98.94        | 9056     | 16             | 842784               | 4              | Large objects     | Normal   | YES
    ECP#BTABI          | DMS     | 2457600  | 98.81        | 29344    | 16             | 2428128              | 4              | Large objects     | Normal   | YES
    ECP#CLUD           | DMS     | 491520   | 98.75        | 6144     | 16             | 485248               | 4              | Large objects     | Normal   | YES
    ECP#STABI          | DMS     | 2424832  | 96.81        | 77280    | 16             | 2400160              | 4              | Large objects     | Normal   | YES
    ECP#SOURCED        | DMS     | 458752   | 95.02        | 22848    | 16             | 435776               | 4              | Large objects     | Normal   | YES
    ECP#SOURCEI        | DMS     | 458752   | 93.34        | 30528    | 16             | 428096               | 4              | Large objects     | Normal   | YES
    SYSTOOLSPACE       | DMS     | 229376   | 92.70        | 16736    | 16             | 212512               | 4              | Large objects     | Normal   | YES
    ECP#EL702IX        | DMS     | 98304    | 84.29        | 15424    | 16             | 82752                | 4              | Large objects     | Normal   | YES
    ECP#LOADD          | DMS     | 131072   | 79.86        | 26368    | 16             | 104576               | 4              | Large objects     | Normal   | YES
    ECP#PROTI          | DMS     | 131072   | 77.57        | 29376    | 16             | 101568               | 4              | Large objects     | Normal   | YES
    ECP#CLUI           | DMS     | 65536    | 63.31        | 24000    | 16             | 41408                | 4              | Large objects     | Normal   | YES
    ECP#DDICI          | DMS     | 1015808  | 63.13        | 374464   | 16             | 986816               | 4              | Large objects     | Normal   | YES
    ECP#DDICD          | DMS     | 5439488  | 61.23        | 2108736  | 16             | 5436512              | 4              | Large objects     | Normal   | YES
    ECP#DOCUD          | DMS     | 65536    | 51.91        | 31456    | 16             | 53024                | 4              | Large objects     | Normal   | YES
    ECP#LOADI          | DMS     | 32768    | 48.53        | 16800    | 16             | 15840                | 4              | Large objects     | Normal   | YES
    ECP#DOCUI          | DMS     | 65536    | 36.25        | 41696    | 16             | 35744                | 4              | Large objects     | Normal   | YES
    ECP#USER1D         | DMS     | 32768    | 30.69        | 22624    | 16             | 10016                | 4              | Large objects     | Normal   | YES
    ECP#USER1I         | DMS     | 32768    | 30.00        | 22848    | 16             | 9792                 | 4              | Large objects     | Normal   | YES
    SAPEVENTMON        | DMS     | 51200    | 28.38        | 36576    | 16             | 14496                | 4              | Large objects     | Normal   | YES
    ECP#DIMD           | DMS     | 32768    | 25.29        | 24384    | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#DIMI           | DMS     | 32768    | 25.29        | 24384    | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#EL702I         | DMS     | 32768    | 25.29        | 24384    | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#FACTD          | DMS     | 32768    | 25.29        | 24384    | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#FACTI          | DMS     | 32768    | 25.29        | 24384    | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#ODSD           | DMS     | 32768    | 25.29        | 24384    | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#ODSI           | DMS     | 32768    | 25.29        | 24384    | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#ES702I         | DMS     | 6651904  | 0.12         | 6643520  | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#EL702D         | DMS     | 17334272 | 0.05         | 17325888 | 16             | 8256                 | 4              | Large objects     | Normal   | YES
    ECP#ES702D         | DMS     | 31391744 | 0.03         | 31383360 | 16             | 8256                 | 4              | Large objects     | Normal   | YES

    Hi Dharmendra,
    The error implies that the SNAP table is full in the SAP system and it cannot write new entries. Since you already tried to reorganize it using ST22, I would request you to do the following:
    1) Check whether the transaction log is full. If yes, then either take a backup of the transaction log or shrink it; you can check db2diag.log for more details.
    Note: if the log shows a particular process filling up the transaction log, kindly terminate it.
    2) Restart SAP and the database.
    3) Repeat the ST22 dump reorganization process.
    Hope it helps
    BR Vaibhav
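
    As a hedged aid for step 1, assuming DB2 9 or later where the SYSIBMADM administrative views are available, the transaction log fill level can also be read with plain SQL instead of digging through db2diag.log:

    -- percentage of the active transaction log currently in use
    SELECT * FROM SYSIBMADM.LOG_UTILIZATION;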

  • SWIFT - Update BIC Database plus

    Dear all,
    In our SAP system we maintain an international bank directory.
    We used the SWIFT BIC Database Plus download for the initial input, with transaction BIC.
    The program compares the data from the input file with the data that already exists in the R/3 System. This works only with the whole bank directory file and not with the delta file, because banks that exist in the R/3 System (in table BNKA) but are not contained in the input file are considered to be no longer current. If the "Set deletion flag" parameter has been activated, these banks are flagged for deletion and can then be reorganized.
    My question is: does somebody know of a solution for importing only the delta file? Or is the only possibility to develop our own program?
    Many thanks for your help.
    Ruth Trisl
    Senior Business Analyst / Deputy
    Novartis Pharma AG

    Hi,
    It's possible to upload the complete file a second time, including the BIC/SWIFT codes.
    Master data records that already include the SWIFT code won't be changed by the SAP BIC upload report.
    Bank master records that carry a deletion flag will have the flag removed by the program.
    Finally, all existing banks will be updated by the SAP report.
    Best regards
    Tim Brickwedde
    www.cbs-consulting.de

  • Read_Only and Read_Write File Groups in Same Database

    We have fairly static reference data in a database that we have set to Read_Only for a number of reasons, and it has worked well in that state. I am being asked to change that now so we can load data daily into this database. I am thinking about creating a read_write filegroup in the database to do this, while still keeping the original tables on a read_only filegroup. I am wondering what issues may occur with this approach; I am concerned about taking this highly-read database to a read_write state and causing issues. It appears the primary data file can't be set to read-only, so I would need to create the read_only filegroup and move all the existing data/tables to that filegroup. Anyone have comments/experience along these lines?

    When a Filegroup is marked as Read-only, SQL Server will not bother with Page or Row locks on the tables or indexes contained in it. This reduces SQL Server overhead and improves performance. Since the data is not changing, index fragmentation does not occur, so maintenance such as rebuilding or reorganizing is unnecessary. That saves time and effort also. Also, in SQL Server 2008 and later, you can mark a Filegroup as Read-only without having exclusive access to the entire database....
    Raju Rasagounder Sr MSSQL DBA
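
    A minimal hedged sketch of the split being considered; all database, file, table and index names are placeholders. The idea is to add a second filegroup, rebuild the static tables onto it, and mark only that filegroup read-only while PRIMARY stays read-write for the daily loads.

    ALTER DATABASE RefDB ADD FILEGROUP StaticFG;
    ALTER DATABASE RefDB ADD FILE
          (NAME = RefDB_static, FILENAME = 'D:\Data\RefDB_static.ndf', SIZE = 1GB)
          TO FILEGROUP StaticFG;

    -- move a table by rebuilding its clustered index on the new filegroup
    CREATE UNIQUE CLUSTERED INDEX PK_RefTable
        ON dbo.RefTable (RefTableId)
        WITH (DROP_EXISTING = ON)
        ON StaticFG;

    -- once every static table has been moved, freeze the filegroup
    ALTER DATABASE RefDB MODIFY FILEGROUP StaticFG READ_ONLY;

    On SQL Server 2005 the final statement still needs exclusive use of the database; as noted above, 2008 relaxes that.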

  • PSAPSR3 - unusual database growth

    Dear All
    In our SAP XI - Oracle 10g database, the tablespace shows unusual growth compared to the last couple of months. If you look at the following figures, which I extracted through DB13, compared to the previous several months the tablespace "PSAPSR3" has more than doubled.
    Can someone give me a clue on how to find the reason for this?
    Month         Size (GB)
    07.12.2009    1.663.104
    30.11.2009    4.484.352
    31.10.2009    2.926.848
    30.09.2009    1.888.512
    31.08.2009    1.370.816
    31.07.2009    1.152.832
    30.06.2009    1.636.864
    31.05.2009    1.240.640
    30.04.2009    1.163.520
    31.03.2009    1.382.080
    28.02.2009    1.151.616
    31.01.2009    1.037.632
    I have also attached the brconnect log from DB13:
    BR0970W Database administration alert - level: WARNING, type: CRITICAL_TABLESPACE, object:
    Regards
    Nawa

    Hi
    I ran the report RSXMB_SHOW_REORG_STATUS; the results are below. Does this have something to do with the growth of SAPSR3?
    Number of messages in DB:                                765.770
    Number of messages in client:                            765.770
    Number of messages in client still to be reorganized:    765.770
    I have also extracted the largest tables and LOB segments in SAPSR3:
    SEGMENT_NAME                SEGMENT_TYPE    MB
    SWWCNTP0                    TABLE           10778
    SYS_LOB0000024288C00008$$   LOBSEGMENT      10148
    SWFRXICNT                   TABLE           4620
    SXMSCLUP                    TABLE           3919
    SXMSCLUR                    TABLE           2504
    SYS_LOB0000024300C00009$$   LOBSEGMENT      1478
    SYS_LOB0000024288C00008$$   LOBSEGMENT      10148
    Can I delete these LOBs?
    Regards
    Nawa
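
    Regarding "can I delete these LOBs": a SYS_LOB... segment is just the storage for a LOB column of some table, so it cannot be dropped on its own. A hedged first step is to map each segment back to its owning table, then let the normal XI archiving/deletion jobs shrink that table:

    SELECT owner, table_name, column_name
      FROM dba_lobs
     WHERE segment_name IN ('SYS_LOB0000024288C00008$$',
                            'SYS_LOB0000024300C00009$$');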
