CRM - Modifying block size of Adapter Object (R3AC1)

Hi,
We have differences between our SAP R/3 and CRM systems: not all business partners are in sync between the two systems. To bring the business partner tables back in sync, we are executing R3AR2/R3AR4 for a range of 10,000 BPs at a time.
My question is, we are doing this with the block size of adapter object BUPA_MAIN set to 1. This results in the creation of one queue per business partner. Is there any harm in keeping the block size at 1?
We attempted it with block size 100, but for some reason it does not work correctly: with block size 100, not all the BPs from the input range are selected for replication.
Thanks,
Amol

Hi Amol - For the MATERIAL adapter we are using the MARA-MTART criterion, but we also want to apply MARC-WERKS.
To stop the entire MATERIAL record from being downloaded, we used BTE OPEN_FI_PERFORM_CRM0_200_P and wrote a function module to interrogate the MARC table for the MATNR.
If the MATNR was not extended to a specific plant, the record was not downloaded to CRM.
However, I discovered that with the MATERIAL BLKSIZE = 100, some records that did not meet the criteria were slipping through to CRM.
So I set BLKSIZE = 1, and the correct filtering now occurs! I'm not sure why, but I suspect something in the CRS_SEND_TO_SERVER function module in ECC is not looping properly, and it works just fine when there is only one record at a time.
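For what it's worth, the per-record check inside such a BTE function module boils down to something like the minimal ABAP sketch below. The interface, the structure used for the block table, and the plant value are all assumptions - the real signature has to be copied from the sample function module for event OPEN_FI_PERFORM_CRM0_200_P in your system - but it shows the key point: the filter has to loop over every material in the block, not just the first one.

FUNCTION z_filter_material_by_plant.
*"------------------------------------------------------------------
*" Hypothetical interface - copy the real one from the sample FM for
*" BTE event OPEN_FI_PERFORM_CRM0_200_P in your ECC system.
*"  TABLES
*"    t_materials STRUCTURE mara   " materials in the current block
*"------------------------------------------------------------------
  DATA ls_marc TYPE marc.

  " Check EVERY record in the block, not only the first one.
  LOOP AT t_materials.
    " Is the material extended to the required plant?
    SELECT SINGLE * FROM marc INTO ls_marc
      WHERE matnr = t_materials-matnr
        AND werks = '1000'.          " plant value is an assumption
    IF sy-subrc <> 0.
      " Not extended: drop it from the block so it is not sent to CRM.
      DELETE t_materials.
    ENDIF.
  ENDLOOP.
ENDFUNCTION.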

Similar Messages

  • Multiple block size

    Hi,
    When I use multiple block size tablespaces (32K), I have to set the DB_32K_CACHE_SIZE parameter.
    Assume the size of the buffer cache is 500M. If I set DB_32K_CACHE_SIZE to 200M, will there be only 300M available in the buffer cache? How does the allocation work?

    The vast majority of databases do not need to deploy multiple blocksizes.
    As noted here, it is not for all databases, only specific ones: http://www.dba-oracle.com/t_multiple_blocksizes_summary.htm
    If you still have doubts, consult the official documentation and Metalink.
    Here is IBM's Oracle documentation on multiple blocksizes: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100883
    To your allocation question: the non-default block size caches are separate pools, allocated in addition to the default buffer cache (DB_CACHE_SIZE), so setting DB_32K_CACHE_SIZE to 200M does not take 200M away from the default cache.
    While most customers use only the default database block size, it is possible to use up to 5 different database block sizes for different objects within the same database.
    Having multiple database block sizes adds administrative complexity and (if poorly designed and implemented) can have adverse performance consequences. Therefore, multiple block sizes should be used only after careful planning and performance evaluation.

  • Block size in middleware adapter object

    Hi
    I am trying to download customers from ECC to CRM.
    If the block size is set to 1 in the CUSTOMER_MAIN adapter object and I download a request (R3AR4) with a range of 20,000 (say 100,000 to 120,000), the request downloads fine. The same happens for block size 3.
    But if I try block size 100 and increase the range to 20,000+, the CRM system goes down. I think it gets into an infinite loop, and I then have to restart the CRM system.
    Has anyone come across such a situation, or can someone suggest what is going wrong?
    Thanks in advance for your help.
    Karthik

    Hi,
    Did you try with a block size of 1000? That is the standard value. Give it a try.
    Thanks,
    Paul Kondaveeti

  • Can I copy MATERIAL adapter object and create new business object in R3AC1

    Hi experts,
    I need to copy the MATERIAL adapter object and create a new business object in R3AC1.
    Please let me know whether this is feasible.
    Thanks & Regards,
    Keya

    There is no copy functionality for adapter objects. Usually, when there is a need to change or create an adapter object, I suggest changing the object class to CUSTOMIZING in R3AC1. Then save it, call transaction R3AC3, and there create the new adapter object and maintain all tabs according to your requirements, making any other necessary changes. When you are finished, change the object class CUSTOMIZING back to the original class name of the object you are copying, and the new adapter object will be visible in R3AC1.
    Documentation can be found at
    https://service.sap.com/ce -> Early Product Training -> SAP CRM 4.0 & mySAP CRM Edition 2004. On the learning map, choose "Development Consultant" and open "Enhance CRM Scenarios".
    There you can find a lot of guides and SAP Tutor recordings on how to create an adapter object, example coding, and so on.
    To enable filter fields for an adapter object, make the requisite changes in table SMOFFILFLD and generate the adapter object using SMOGGEN. Also check table CRMPAROLTP in ECC to enable filtering.

  • How can i modify the cache block size to 64k in the 3510 array

    Which option should I modify if I want to set the cache block size to 64K on the 3510 array? Because my stripe size was set to 64K, I want to set my cache block size to 64K as well.
    The documentation mentioned that it can be modified, but not how.
    I checked all the cache-related options on the 3510 and didn't find it... :(
    Could anybody tell me how to do it?
    Thx!!

    Could anybody help me?

  • How to get the connected Adapter Object for a particular BDoc Type?

    Hi All,
    I have a scenario in which a CRM system is connected to an ERP system.
    In transaction SMW01, I can see one BDoc with BUS_TRANS_MSG as the BDoc type in CRM.
    Now, how do I get to know whether this BDoc is a SALESDOCUMENT or a SALESCONTRACT?
    Is there any way through which I can find the adapter object for this particular BDoc?
    Regards,
    Madhuri

    Hi Madhuri,
    Happy new year.
    In transaction R3AC1 you can see the linked BDoc for the adapter objects. For example, sales document and sales contract have the same linked BDoc, BUS_TRANS_MSG.
    If you see an error in SMW01 and want to find out whether it is a sales document or a sales contract:
    1. Take the queue name from SMW01.
    From the queue name you can tell whether it is a sales contract or a sales document. The queue names are customized in table SMOFQFIND.
    I hope this helps you.
    regards,
    Sri...
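    If you just want to eyeball the SMOFQFIND entries next to the queue name you copied from SMW01, a throwaway report like the sketch below will do (it needs a 7.40+ system for the inline declaration and CL_DEMO_OUTPUT; on older releases simply browse SMOFQFIND in SE16):

    REPORT z_show_smofqfind.
    " Dump the queue-name customizing on the CRM system so the entries
    " can be compared against the queue name taken from SMW01.
    SELECT * FROM smofqfind INTO TABLE @DATA(lt_qfind).
    cl_demo_output=>display( lt_qfind ).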

  • Adapter Object Marketing Planner

    Hi Forum,
    I have an issue with the Marketing Planner transfer to R/3 Project Systems.
    Is there a way to find the table details of the source and target for adapter object CRM_MKTPL_R3INT?
    I have created the site types, subscription objects and all the customizing for integration in both CRM and R/3, as well as in PS. But now, when I run the initial load, it says no table is found for data exchange.
    When I go to R3AC1 and check the adapter object for the Marketing Planner, there are no tables defined for the source and destination. I am confused, because in R3AC3 there are many tables for the different Marketing Planner and campaign objects.
    Are the tables CGPL_PROJECT (source) and PRPS (target) correct?
    Regards,
    Amit

    Hi Raj,
    In my opinion these are intermediate tables, since I checked them and they are still empty. I have already released 4 campaigns, so if these were the initial tables to start with, all 4 campaigns should have been there.
    The direction of the CRM_MKTPL_R3MAP table is from CRM to R/3, so it is a source table, and it should have those 4 campaigns.
    The direction of CGPL_R3_ATTRIBUT is R/3 to CRM, so it is the target destination; it receives at the R/3 end.
    On the other hand, CGPL_PROJECT has all the entries, so I have used it as the source and PRPS as the target, but the queue is still in wait status.
    If you have this scenario configured, please check in R3AC1.
    Regards,
    Amit

  • How to install 10g database on windows with db block size 16k

    Hi,
    Can someone help me install an Oracle 10g database on Windows XP with a DB block size of 16K?
    I need this database because a 16K block size is one of the recommendations for installing OWB (Oracle Warehouse Builder).
    Thanks,
    Philip.

    1) In the initialization parameter file, set DB_BLOCK_SIZE before creating the database, e.g. DB_BLOCK_SIZE=16384. That is one way.
    2) The other way: once you have installed Oracle 10g, at the time you create the database with Database Configuration Assistant you can modify the block size on the Initialization Parameters screen:
    - Sizing tab
    - Block Size: 8K/16K
    *** Remember, by default Oracle 10g uses an 8K block size.
    You use the options on the Sizing tab to configure the block size of your database and the number of processes that can connect to this database. The Block Size setting corresponds to the smallest unit of storage within the Oracle database. All storage of database objects (tables, indexes, and so on) is governed by the block size. The block size defaults to 8KB, but you can modify it. Once the database is created, you cannot modify this setting.
    The maximum and minimum size of an Oracle block depend on the operating system. Generally, 8KB is sufficient for most transaction-oriented applications; larger block sizes such as 16KB and higher are used in data-warehouse-type applications.
    The Processes setting specifies the maximum number of simultaneous operating system processes that can be connected to this Oracle database. You must include at least six processes for the Oracle background processes. You can increase this number on the Initialization Parameters screen.

  • Adapter Object

    Hi Experts,
    I am new to CRM Middleware. Please help me understand the difference between a customizing adapter object and a business adapter object.

    Hello,
    Customizing objects are meant to get customizing data from R/3 to CRM.
    You can check the customizing objects in transaction R3AC3.
    Business objects are meant to get the actual data (master data as well as transaction data) from R/3 to CRM.
    You can check them in transaction R3AC1.
    For example, say you want to download materials from R/3 to CRM.
    In order to get the materials into CRM, you first need to make sure that the necessary customizing has been maintained (or downloaded from R/3), such as the material numbering scheme, numbering settings, hierarchy data, etc.
    Only once this has been downloaded can the materials be downloaded from R/3 to CRM using the business object, which is MATERIAL in this case.
    If you check the business object in transaction R3AC1, you can see under the Parent Objects tab that customizing objects are maintained, which means the customizing objects have to be downloaded before performing the business object download.
    Hope this clarifies your doubt.
    Best Regards,
    Shanthala Kudva.

  • Middleware and Adapter Object help needed

    Hi all,
    We are trying to replicate business agreements from CRM to contract accounts in our R/3 IS-U system. We've followed the various cookbooks and guidelines but have so far been unsuccessful. For object BUAG_MAIN, is an adapter object needed? If so, we are unable to find one in R3AC1.
    If we check the BDoc message, we are getting two Technical errors after creating the business agreement:
    1. "Outbound call for BDoc type BUAG_MAIN to adapter module CRM_UPLOAD_TO_OLTP failed."
    2. "Service that caused the error: SMW3_OUTBOUNDADP_CALLADAPTERS"
    Any ideas? Thanks in advance.

    So in R3AC1 there is a button to show inactive adapter objects. Clicking it showed BUAG_MAIN. We then opened BUAG_MAIN and activated the object, clicking through a couple of resulting changes. After that we were able to do an initial load of the business agreements, and replication of newly created ones happened thereafter.

  • ORA-00349: failure obtaining block size for '+Z'  in Oracle XE

    Hello,
    I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" ( Oracle Database 10g Express Edition Release 10.2.0.1.0).
    When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Please let me know how to go about resolving this issue.
    Thank you.
    See below for detail:
    Connected.
    SQL> @?/sqlplus/admin/movelogs;
    SQL> Rem
    SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
    SQL> Rem
    SQL> Rem movelogs.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem NAME
    SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
    SQL> Rem
    SQL> Rem DESCRIPTION
    SQL> Rem This script can be used to move online logs from old online log
    SQL> Rem location to Flash Recovery Area. It assumes that the database
    SQL> Rem instance is started with new Flash Recovery Area location.
    SQL> Rem
    SQL> Rem NOTES
    SQL> Rem For use to rename online logs after moving Flash Recovery Area.
    SQL> Rem The script can be executed using following command
    SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
    SQL> Rem
    SQL> Rem MODIFIED (MM/DD/YY)
    SQL> Rem banand 01/19/06 - Created
    SQL> Rem
    SQL>
    SQL> SET ECHO ON
    SQL> SET FEEDBACK 1
    SQL> SET NUMWIDTH 10
    SQL> SET LINESIZE 80
    SQL> SET TRIMSPOOL ON
    SQL> SET TAB OFF
    SQL> SET PAGESIZE 100
    SQL> declare
    2 cursor rlc is
    3 select group# grp, thread# thr, bytes/1024 bytes_k
    4 from v$log
    5 order by 1;
    6 stmt varchar2(2048);
    7 swtstmt varchar2(1024) := 'alter system switch logfile';
    8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
    9 begin
    10 for rlcRec in rlc loop
    11 stmt := 'alter database add logfile thread ' ||
    12 rlcRec.thr || ' size ' ||
    13 rlcRec.bytes_k || 'K';
    14 execute immediate stmt;
    15 begin
    16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
    17 execute immediate stmt;
    18 exception
    19 when others then
    20 execute immediate swtstmt;
    21 execute immediate ckpstmt;
    22 execute immediate stmt;
    23 end;
    24 execute immediate swtstmt;
    25 end loop;
    26 end;
    27 /
    declare
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Can someone point me in the right direction as to what I may be doing wrong here - Thank you!

    888442 wrote:
    I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the error below.
    On the primary we have made the changes, i.e. we added a new logfile with a bigger size and 3 members. When trying to do the same on the standby, we get this error.
    Our database is in Active DG read-only mode and the Oracle version is 11.1.0.7.
    I have deferred the log apply and cancelled managed recovery, and DG is in manual mode.
    SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+DT_DG1'
    First, why are you dropping and recreating online redo log files on the standby? On a standby, only standby redo log files will be used; I am not sure what you are trying to do.
    Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space to create them.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    sys@ORCL> alter database add logfile group 4 (
      2     'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
      3     'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
      4     'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
    Database altered.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
    6 rows selected.
    sys@ORCL>
    Close the threads if answered; keep the forum clean.

  • How to debug mapping module of a Adapter Object

    I have created an adapter object that sends data from an ECC table to CRM. In the Tables/Structures tab, I have specified the ECC table the data will come from.
    A function module was created to convert the data and is mapped in the Mapping Module of the object. Now, though the initial load runs successfully, the table entry is not inserted in the CRM table.
    Though I have set a breakpoint in the FM, it is not triggered. Is there any way to debug the FM?

    Hi,
    Can you let me know the download status of your adapter object, using transaction R3AM1? If it is DONE, you can trigger another initial load.
    The inbound queue formation on CRM is governed by the USE_IN_QUEUE field of table CRMRFCPAR on the ECC side for your adapter object. It should be set to 'X', so if there is a specific entry for your adapter object in CRMRFCPAR, check it.
    If this setting is maintained, you should see a queue name like R3AI_<objectName> in SMQ2 of CRM. Of course, you would need to deregister the R3AI* queue in transaction SMQR first, so that the queue stays in SMQ2 and you can debug it.
    Thanks,
    Rohit
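    A quick way to check this on the ECC side, instead of browsing the table in SE16, is a throwaway report along these lines. It is only a sketch: USE_IN_QUEUE is the field named above, OBJNAME is assumed to be the adapter-object key field, and the inline declarations need a 7.40+ system.

    REPORT z_check_crmrfcpar.
    " List the CRMRFCPAR entries so you can verify that the entry for
    " your adapter object has USE_IN_QUEUE = 'X'.
    SELECT * FROM crmrfcpar INTO TABLE @DATA(lt_par).
    LOOP AT lt_par INTO DATA(ls_par).
      " OBJNAME as the object key field is an assumption - check SE11.
      WRITE: / ls_par-objname, ls_par-use_in_queue.
    ENDLOOP.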

  • Cannot set Block Size

    Hi all,
    When I tried to create an 11g database using DBCA, I noticed the tool did not allow me to select the block size; it was automatically set to 8KB, without my being able to modify it.
    Is there any reason for that?
    OS: Windows XP SP2

    When you create a new database with DBCA you have two options: create it from a seed database (by means of a cloning procedure) or create a completely new (custom) database. If you choose the first option, you must adopt the physical structure of the seed database, which by default uses 8K blocks, and there is no way to change it. If you create a completely new database, you can modify the block size, since there is no physical structure yet.
    ~ Madrid

  • Raid 0 (Stripe) for OS X boot disk? Best Performance and block size

    Hi,
    so this is a new thread for an older question of mine that I would like some feedback on:
    I have a new Mac Pro with 4 matched 1TB Caviar Black drives. I WILL be doing full Time Machine backups, as well as a regular independent full-system backup.
    That being said, I have 4 drives open and am looking for suggestions. I am leaning toward 2 sets of stripes (one for the OS and one for 'work space'; the former with a 32K stripe block size, the latter with 64K, which will hold video, audio, scratch, and, yes, games).
    Does this sound alright? Is there an issue with striping the boot drive? Is a block size of 32K (or 64K) optimal?
    Thanks!
    Dan

    Hi D3 Shooter, regarding your question,
    D3 Shooter wrote:
    You brought to mind something I did not take into consideration, Time Machine. I really like the simplicity of TM as it saved me once before. So, could you tell me, for photo files and some video, how much (percentage-wise) striping improves the accessing and filing of such files compared to no striping, using internal drives (7200/WD/1TB/Caviar)? I have not done striping before and want to weigh in because of the backup storage issues now. Thanks.
    Just give it a try and see if it is worth it for you.
    Striping:
    • just enhances access (reduces access/transfer time), because in practice the access is distributed in parallel across several DDMs (old school, but it works great!). I think for video and file work the advantage is that you can access the whole object sooner (rather than faster).
    • this distribution also reduces a lot of old-style queuing on the device over the path. This was resolved in the late 1980s, so no real rocket science here.
    The issues with striping are few, and apply to basically all the RAID implementations (except JBOD, which of course is not RAID) when compared to a single spindle. The discussions are enormous and plentiful via Google, and experiences and opinions vary widely.
    For the I.T. people, it's the advantage they get for access using a smart disk controller that caches goodies like indexes and such, so that they can sustain a zillion trivial transactions/sec (i.e. banking and internet stuff) - stuff that is of no interest to me.
    For the creative people and the many applications that deal in BLOBs (like video, film and remote-sensing objects), getting use of the objects sooner (not faster) is of prime importance for workflow efficiency. If you have this need, then striping stuff across disks is for you!
    TimeMachine.app works fine, as it seems fairly agnostic to what is implemented under the disk file system. My issue with Time Machine is that I don't want it looking after my production stuff, only keeping an eye on my admin/I.T.-type stuff such as ~/ and data files.
    As posted on this thread:
    • availability is the major concern with any file system (cloud or RAID or other). RAID with parity schemes and double-parity schemes (RAID 1, 3, 5, 6) and implementations such as RAID 6 + LSF (log-structured file) are all wonderful for the business workflows that need them.
    • timely access in a workflow is another
    • cost benefit is another
    However, a great benefit for me of consolidating small storage components under one huge file system is that you don't have to copy anything around. This is marvelous, especially when you think you have to move 2TB of stuff from one place to another. That takes a lot of time with el-cheapo disks that don't have fast interfaces such as SATA/SAS or FC, for example.
    As always, and as has been addressed by others on this thread (Hatter), if you lose a component storage device the whole file system is hosed or severely degraded, unless you spend a lot of money on full ranks of DDMs with hot spares and a very good RAID controller card. Again, it's money.
    Yeah, sure, you can carry some parity-RAID implementation across 3 disks, but the storage capacity usage is dreadful. This is why the more complex RAID implementations come in groups of 10+ DDMs (yep, people can argue, but this is the mainstream).
    My external disk arrays are merely two LUNs (SAS domains) that have two file systems implemented using 2 x 4TB of 1TB DDMs, all RAID 0: no parity (no availability). I just want speed. I look after my own "availability" with my archive solution. If the operation dies, I start again; I'm happy with that. RAID 5 has write-penalty performance hits (the well-known update-in-place), and RAID 6+ is lousy for huge objects but good for I.T., though it is OK if you lose two disks in a stripe (rank).
    They all have their flaws, and mirroring a RAID 0 (RAID 1/0) seems to be popular with storage vendors because they can sell you more disk, and that's proper business; workflow depends on it.
    However, you can achieve this stuff if you change your workflow slightly.
    Other than these, the rest is tech specs and stuff under the covers.
    So decide what is right for you and your business.
    I don't like spending money on nasty el-cheapo FW800 LaCie disk enclosures with their junky components and their ilk, having been burned badly by several corrupted devices and losing TBs of content; this is why I invested in a high-speed LTO4 Ultrium data tape archive solution.
    Sorry for the long post.
    w

  • Transaction execution time and block size

    Hi,
    I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching the transaction time for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration, because of Oracle block/datafile size limits:
    The 2K Oracle block database had 3 datafiles, each 7GB.
    The 4K Oracle block database had 2 datafiles, each 10GB.
    The 8K Oracle block database had 1 datafile of 20GB.
    Now, the best transaction (transaction execution) time was on the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction times.
    I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, the largest table in the database (2.9GB), and was executed slowly (the number of executions was low compared to the 8K numbers).
    Now here is my question: is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
    THX to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    > I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
    A single drive does make it a little too easy for apparently random variation in performance.
    > Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up.
    Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
    > (each time the SGA and PGA parameters were identical)
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to changes in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    > In all experiments I gave my own tablespace (NEWTS) a different configuration, because of Oracle block/datafile size limits.
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    > Now, the best transaction time was on the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction times.
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation, and then it has to be consistent and reproducible.
    > I identified a SQL query that was creating hot segments on the E_TRANSACTION table, the largest table in the database (2.9GB), and was executed slowly.
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table? If not, then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
    > Is it possible that multiple datafiles are the reason for these slow transaction times? How could I identify this multi-datafile problem by looking at the AWR statistics?
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
    Regards
    Jonathan Lewis
