Block size in middleware adapter object

Hi
I am trying to download customers from ECC to CRM.
If the block size is set to 1 in the CUSTOMER_MAIN adapter object and I download a request (R3AR4) with a range of 20,000 (say 100,000 to 120,000), the request gets downloaded fine. The same happens for block size 3.
But if I try block size 100 and increase the range to 20,000+, the CRM system comes down. I think it gets into an infinite loop, and I then have to restart the CRM system.
Has anyone come across this situation, or can someone suggest what is going wrong?
Thanks in advance for your help.
Karthik

Hi,
Did you try a block size of 1000? That is the standard value. Give it a try.
Do not forget to reward if it helps,
Thanks,
Paul Kondaveeti

Similar Messages

  • Middleware Adapter objects

    Hi experts,
    I am working on CRM 5.0. I need to create a custom business adapter object (for replicating relationships) to download a custom table from R/3 to CRM (Z-table replication), but I am unable to create a business object in CRM (whereas there is an option to create a customizing object). Is there any way to do this?
    Points will be rewarded.
    Thanks in advance.

    Dear RK,
    In transaction R3AC1 you cannot create the adapter object. However, you can classify an adapter object created in R3AC3 so that it appears in transaction R3AC1.
    In your case you want to pull the BP relationship using a Z-table.
    Create your adapter object in R3AC3, link it to BDoc BUPA_REL, specify the table and fields, and also use a mapping module.
    When you save it, you can see it in R3AC1.
    Hope this will help.
    Regards
    Ashutosh Asthana
    "Please reward with points ,if found helpful."

  • CRM - Modifying block size of Adapter Object (R3AC1)

    Hi,
    We have differences between the SAP R/3 and CRM systems: not all the business partners are in sync between the two systems. In order to bring the business partner tables into sync, we are executing R3AR2/R3AR4 for a range of 10,000 BPs at a time.
    My question: we are doing this with the block size of adapter object BUPA_MAIN set to 1, which results in one queue per business partner. Is there any harm in keeping the block size at 1?
    We attempted it with block size 100, but for some reason it does not work correctly: not all the BPs from the input range get selected for replication.
    Thanks,
    Amol

    Hi Amol - For the MATERIAL adapter we are using MARA-MTART as the filter criterion, but we also want to apply MARC-WERKS.
    In order to stop the entire MATERIAL record from being downloaded, we used BTE OPEN_FI_PERFORM_CRM0_200_P and wrote a function module to interrogate the MARC table for the MATNR.
    If the MATNR was not extended to a specific PLANT, then the record was not downloaded to CRM.
    However, I discovered that with the MATERIAL block size (BLKSIZE) = 100, some records that did not meet the criteria were slipping through to CRM.
    So I set BLKSIZE = 1, and the correct filtering now occurs. I'm not sure why, but I suspect something in the CRS_SEND_TO_SERVER function module in ECC is not looping properly, and it works fine when there is just one record at a time.

  • Middleware and Adapter Object help needed

    Hi all,
    We are trying to replicate business agreements from CRM to contract accounts in our R/3 IS-U system. We've followed the various cookbooks and guidelines but have so far been unsuccessful. For object BUAG_MAIN, is an adapter object needed? If so, we are unable to find one in R3AC1.
    If we check the BDoc message, we are getting two Technical errors after creating the business agreement:
    1. "Outbound call for BDoc type BUAG_MAIN to adapter module CRM_UPLOAD_TO_OLTP failed."
    2. "Service that caused the error: SMW3_OUTBOUNDADP_CALLADAPTERS"
    Any ideas? Points will be awarded for helpful responses. Thanks in advance.

    So in R3AC1 there is a button to show inactive adapter objects. Hitting this showed BUAG_MAIN. After that, we opened BUAG_MAIN and activated the object. This resulted in a couple of changes that we clicked through. We were then able to do an initial load of the business agreements, and replication of newly created ones happened thereafter.

  • Raid 0 (Stripe) for OS X boot disk? Best Performance and block size

    Hi,
    so this is a new thread to an older question I had and would like some feedback on;
    I have a new Mac Pro with 4 matched 1TB Caviar Black drives. I WILL be doing full Time Machine backups, as well as an independent full-system backup regularly.
    That being said, I have 4 drives open and am looking for suggestions. I am leaning toward 2 sets of stripes (one for the OS and one for 'work space'), the former with a 32k stripe block size, the latter with 64k (it will hold video, audio, scratch, and, yes, games).
    Does this sound all right? Is there an issue with striping the boot drive? Is a block size of 32k (or 64k) optimal?
    Thanks!
    Dan

    Hi D3 Shooter, regarding your question,
    D3 Shooter wrote:
    You brought to mind something I did not take into consideration, Time Machine. I really like the simplicity of TM, as it saved me once before. So, could you tell me, for photo files and some video, how much does striping (% wise) improve the accessing and filing of such files compared to no striping, using internal drives (7200/WD/1TB/Caviar)? I have not done striping before and want to weigh it up because of the backup storage issues now. Thanks.
    Just give it a try and see if it is worth it for you.
    Striping:
    • just enhances things (reduces access/transfer time) because in practice the access is distributed in parallel across several DDMs (old school, but it works great!). I think for video and file work the advantage is that you can access the whole object sooner (rather than faster).
    • this distribution also reduces a load of old-style queuing on the device over the path. This was resolved in the late 1980s, so no real rocket science here.
    The issues with striping are few and apply broadly across the RAID implementations (except JBOD, which of course is not RAID) when compared to a single spindle. The discussions are enormous and plentiful via Google, and experiences and opinions vary widely.
    For the I.T. people, it's the advantage they get for access using a smart disk controller that caches goodies like indexes and such, so that they can sustain a zillion trivial transactions/sec (i.e. banking and internet stuff) - stuff that is of no interest to me.
    For the creative people and the many applications that deal in BLOBs (like video, film and remote-sensing objects), getting use of the objects sooner (not faster) is of prime importance for workflow efficiency. If you have this need, then striping stuff across disks is for you!
    Time Machine works fine, as it seems fairly agnostic to what is implemented under the disk file system. My issue with Time Machine is that I don't want it looking after my production stuff, only keeping an eye on my admin/I.T.-type stuff such as ~/ and data files.
    As posted on this thread:
    • availability is the major concern with any file system (cloud, RAID or other). RAID with parity and double-parity schemes (RAID 1, 3, 5, 6) and implementations such as RAID 6 + LSF (log-structured file) are all wonderful for the business workflows that need them.
    • timely access in a workflow is another
    • cost benefits are another
    However, a *great benefit* for me of *consolidating small storage components under one huge file system is that you don't have to COPY anything around*. This is marvelous, especially when you think you have to move 2TB of stuff from one place to another. That takes a lot of time with el-cheapo disks that don't have fast interfaces such as SATA/SAS or FC, for example.
    As always, and as has been addressed by others on this thread (Hatter), if you lose a component storage device the whole file system is hosed or severely degraded, unless you spend a lot of money on full ranks of DDMs with hot spares and a very good RAID controller card. Again, it's money.
    Yeah, sure, you can carry some parity RAID implementation across 3 disks, but the storage capacity usage is dreadful. This is why the more complex RAID implementations are in groups of 10+ DDMs (yep, people can argue, but this is the mainstream).
    My external disk arrays are merely two LUNs (SAS domains) that have two file systems implemented using 2 x 4TB of 1TB DDMs - all RAID 0 - no parity (no availability) - I just want speed. I look after my own "availability" with my archive solution. If the operation dies, I start again. I'm happy with that. RAID 5 has well-known write-penalty performance hits (update in place), and RAID 6+ is lousy for huge objects but good for I.T., though OK if you lose two disks in a stripe (rank).
    They all have their flaws... and mirroring a RAID 0 (RAID 1/0) seems to be popular with storage vendors because they can sell you more disk, and that's fair enough where business workflow depends on it.
    However, you can achieve this stuff if you change your workflow slightly.
    Other than these, the rest is tech specs and stuff under the covers.
    So do what is right for you and your business.
    I don't like spending money on nasty el-cheapo FW800 LaCie disk enclosures with their junky components and their ilk, having been burned by several corrupted devices and losing TBs of content - this is why I invested in a high-speed LTO-4 Ultrium data tape archive solution.
    Sorry for the long post.
    w

  • Transaction execution time and block size

    Hi,
    I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction times for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration because of the Oracle block/datafile size limits:
    2K oracle block database had 3 datafiles, each 7GB.
    4K oracle block database had 2 datafiles, each 10GB.
    8K oracle block database had 1 datafile of 20GB.
    The best transaction (execution) time was with the 8K block, the 4K block had slightly longer transaction times, and the 2K block definitely had the worst transaction time.
    I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and was executed slowly (the number of executions was low compared to the 8K numbers).
    Now here is my question: is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
    Thanks to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    > I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
    A single drive does make it a little too easy for apparently random variation in performance.
    > Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up.
    Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
    > (each time the SGA and PGA parameters were identical).
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to a change in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    > 2K oracle block database had 3 datafiles, each 7GB. 4K oracle block database had 2 datafiles, each 10GB. 8K oracle block database had 1 datafile of 20GB.
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    > The best transaction time was with the 8K block, the 4K block was a little longer, and the 2K block was definitely the worst.
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation - and then it has to be consistent and reproducible.
    > I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, the largest table in the database (2.9GB), and was executed slowly.
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table - if not, then it consists of one segment, so did you mean to say "blocks" rather than "segments"? If blocks, which class of blocks?
    > Is it possible that multiple datafiles are the reason for these slow transaction times? How could I identify this multi-datafile problem by looking at the AWR statistics?
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues (a sketch of enabling such a trace follows after this reply).
    Regards
    Jonathan Lewis
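    A minimal sketch of the extended tracing Jonathan suggests, assuming a hypothetical session (the SID/serial# and username below are made up and would come from V$SESSION) and a user with EXECUTE on DBMS_MONITOR:

      -- Find the session running the slow transactions (username is hypothetical).
      SELECT sid, serial# FROM v$session WHERE username = 'TPCE_USER';

      -- Switch on SQL trace with wait events for that session.
      BEGIN
        DBMS_MONITOR.session_trace_enable(session_id => 127, serial_num => 4411,
                                          waits => TRUE, binds => FALSE);
      END;
      /

      -- ... run the workload, then switch the trace off ...
      BEGIN
        DBMS_MONITOR.session_trace_disable(session_id => 127, serial_num => 4411);
      END;
      /

      -- Summarise the raw trace file at the OS prompt, e.g.:
      -- tkprof ORCL_ora_12345.trc slow_txn.prf sort=exeela waits=yes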

  • Specifying segments and block size manaually

    Hi, just a quick question,
    But could anyone help me understand why someone might manually add segments to a tablespace (or is it a datafile they would be added to)? Does autoextend not take care of this?
    And secondly: why would you increase or decrease the block size of a segment? Is this because you may have small or large rows within a table and want a block size to accommodate this?
    Any help would be appreciated.

    Hi,
    In Oracle, free space can be managed automatically or manually. You specify automatic segment space management when you create a locally managed tablespace (a short sketch follows at the end of this reply).
    Free space can be managed automatically inside database segments. The in-segment free/used space is tracked using bitmaps, as opposed to free lists. Automatic segment space management offers the following benefits:
    -Ease of use
    -Better space utilization, especially for the objects with highly varying size rows
    -Better run-time adjustment to variations in concurrent access
    -Better multi-instance behavior in terms of performance/space utilization
    For manually managed tablespaces, two space management parameters, PCTFREE and PCTUSED, enable you to control the use of free space for inserts and updates to the rows in all the data blocks of a particular segment. Specify these parameters when you create or alter a table or cluster (which has its own data segment). You can also specify the storage parameter PCTFREE when creating or altering an index (which has its own index segment).
    see this link
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/b_deprec.htm#634923 :)
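    A minimal sketch of the two approaches described above; the tablespace, datafile and table names are made up for illustration:

      -- Locally managed tablespace with automatic segment space management (ASSM):
      CREATE TABLESPACE users_assm
        DATAFILE '/u01/oradata/ORCL/users_assm01.dbf' SIZE 500M
        EXTENT MANAGEMENT LOCAL
        SEGMENT SPACE MANAGEMENT AUTO;

      -- In a tablespace with MANUAL segment space management, PCTFREE/PCTUSED control
      -- how much block space is kept free for updates and when a block rejoins the free list:
      CREATE TABLE emp_manual (
        emp_id   NUMBER,
        emp_name VARCHAR2(50)
      )
      PCTFREE 10 PCTUSED 40
      TABLESPACE users_manual;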

  • Can tablespace block size different from the database block size

    I have a 10.2.0.3 database in Unix system.
    I created the database with the default block size of 8k. However, the client application requires a 16k block size. Can I work around this by creating a tablespace with a 16k block size, instead of dropping and recreating the database?
    Thanks a lot!

    As Steven pointed out, you certainly can (a sketch of the mechanics follows at the end of this reply).
    I would generally question, though, whether you should.
    - Why does the application require 16k block sizes? If this is a custom application, it almost certainly doesn't really require 16k blocks. If this is a packaged application, it probably doesn't really require 16k blocks. If 16k blocks are a requirement for support, I would wager that having the application's objects in 16k block size tablespaces in a database with 8k blocks would not be supported.
    - Mixing block sizes increases the management complexity of your database, potentially substantially. You need to specify a completely separate buffer cache for the 16k blocks, a buffer cache that would not be integrated with Oracle's automatic SGA management functionality. Figuring out how to split up the buffer cache between 8k and 16k blocks tends to be rather hard (particularly if the mix changes over time), which means that DBAs are going to be spending substantially more time managing the SGA in this sort of system than in a vanilla 10.2.0.3 system. And that DBAs will have many more opportunities to set things up incorrectly.
    Justin
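    A minimal sketch of the mechanics Steven and Justin refer to, with made-up names and sizes; the dedicated 16K cache has to exist before the tablespace can be created:

      -- Carve out a buffer cache for 16K blocks (size is only an example):
      ALTER SYSTEM SET db_16k_cache_size = 128M SCOPE = BOTH;

      -- Then a tablespace with the non-default block size can be added to the 8K database:
      CREATE TABLESPACE app_data_16k
        DATAFILE '/u01/oradata/ORCL/app_data_16k01.dbf' SIZE 2G
        BLOCKSIZE 16K
        EXTENT MANAGEMENT LOCAL
        SEGMENT SPACE MANAGEMENT AUTO;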

  • Need to understand the relation B/w Bigger block size like 16k or 32k

    Hi
    How can we determine which block size is good for the database, especially for a reporting DB on which real-time replication is being performed?
    I would really appreciate it if someone could help me identify this, or are there any ways to find out the correct DB block size by querying any DB objects?

    I think that part of the decision will have to be based on knowledge of your application and how it interacts with the database. If your database does a lot of single-record reads/updates, then a smaller block size will probably be more appropriate.
    However, if your application does bulk reads and insertions, then you might benefit from larger block sizes. (A quick way to see what is currently configured is sketched after this reply.)
    There are several threads already on this subject, with references as well:
    size of db_block_size
    how to decide size of db_block_size of a block.
    Re: DB_BLOCK_SIZE
    do a search in the search field on the main thread page for more.
    Regards
    tim
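    No query can tell you the "right" block size for a workload, but as a starting point, here is a small sketch (standard dictionary views) for seeing what is currently configured:

      -- Default block size of the database:
      SELECT value AS db_block_size FROM v$parameter WHERE name = 'db_block_size';

      -- Block size per tablespace (any non-default sizes show up here):
      SELECT tablespace_name, block_size FROM dba_tablespaces ORDER BY block_size;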

  • How to install 10g database on windows with db block size 16k

    Hi,
    Can someone help me install an Oracle 10g database on Windows XP with a DB block size of 16k?
    I need this database because it is one of the recommendations for installing OWB (Oracle Warehouse Builder).
    Thanks,
    Philip.

    1) In the initialization parameter file:
    DB_BLOCK_SIZE = 8192 or 16384 (16K) - this is one way.
    2) The other way: once you install Oracle 10g, at the time you create the database with the Database Configuration Assistant you can modify the block size on the Initialization Parameters screen:
    - Sizing tab
    Block Size 8K/16K
    *** Remember, by default Oracle 10g uses an 8K block size (a brief sketch follows at the end of this reply).
    You use the options on the Sizing tab to configure the block size of your database
    and the number of processes that can connect to this database. The Block Size setting corresponds to the smallest unit of storage within the Oracle database. All storage of database objects (tables, indexes, and so on) is governed by the block size. The block size defaults to 8KB, but you can modify it. Once the database is created, you cannot modify this setting.
    The maximum and minimum size of an Oracle block depends on the operating system. Generally, 8KB is sufficient for most transaction-oriented applications, and larger block sizes such as 16KB and higher are used in data warehouse–type applications.
    The Processes setting specifies the maximum number of simultaneous operating system processes that can be connected to this Oracle database. You must include at least six processes for each of the Oracle background processes. You can increase this number on the Initialization parameter screen.
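    A brief sketch of the first option above, assuming a manually prepared pfile; the value is fixed at CREATE DATABASE time and cannot be changed afterwards:

      # init<SID>.ora entry for a 16K database (set before the database is created):
      db_block_size = 16384

      -- Verify after creation from SQL*Plus:
      SHOW PARAMETER db_block_size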

  • Hashed Adapter Object

    In designing my code I am trying to avoid multiple if else/switch statements. I have been looking at differing design patterns, and I have come across the Hashed Adapter Object. The key can be used to look up the correct value object without having to do multiple if else or case statements. I am curious whether this approach can have drawbacks if the hashtable gets significantly large. For example, will the instantiation of the hashtable require too much time or memory when the size is greater than 1000? Is there a better design pattern to use to avoid the if else/switch statements when doing a lookup?

    hello_world wrote:
    In designing my code I am trying to avoid multiple if else/switch statements. I have been looking at differing design patterns, and I have come across the Hashed Adapter Object. The key can be used to look up the correct value object without having to do multiple if else or case statements. I am curious whether this approach can have drawbacks if the hashtable gets significantly large. For example, will the instantiation of the hashtable require too much time or memory when the size is greater than 1000? Is there a better design pattern to use to avoid the if else/switch statements when doing a lookup?
    If all you want to do is store values in a hashtable, there's no pattern there; it's just called "using a hashtable".
    If you are talking about putting objects that implement a common interface as values in the hashtable and looking them up based on a key, that's basically a simple form of the Strategy pattern.
    Putting an object in a hashtable merely means putting a reference to an existing object in the table. A handful of bytes for each mapping. 1000 mappings is nothing. If you are wrapping primitives and placing them in the hashtable, that will consume more memory (mitigated in newer versions of the JDK if you use valueOf()) but it's still minuscule compared to the commonly available memory for Java applications. 1000 isn't much even in memory constrained environments. Hashtables are also very fast. Adding lots of elements will not (in general) make access significantly slower.
    Lastly, make sure you use HashMap and not Hashtable.

  • Adapter Object

    Hi Experts,
    I am new to CRM Middleware. Please help me understand the difference between a customizing adapter object and a business adapter object.

    Hello,
    Customizing objects are meant to get the customizing data from R/3 to CRM.
    You can check the customizing objects in transaction R3AC3.
    Business objects are meant to get the actual data (master data as well as transactional data) from R/3 to CRM.
    You can check them in transaction R3AC1.
    For eg:
    You want to download materials from R/3 to CRM.
    But in order to get the materials into CRM, you first need to make sure that the necessary customizing has been maintained (or downloaded from R/3), such as the material numbering scheme, numbering settings, hierarchy data, etc.
    Only once this is downloaded can materials be downloaded from R/3 to CRM using the business object, which is MATERIAL in this case.
    If you check the business object in transaction R3AC1, you can see under the Parent Objects tab that customizing objects are maintained, which means that the customizing object has to be downloaded before performing the business object download.
    Hope this clarifies your doubt.
    Best Regards,
    Shanthala Kudva.

  • How to estimate the size of the database object, before creating it?

    A typical question that arises:
    As DBAs, we all know how to determine an object's size from the database.
    But before creating an object in a database or schema, we analyze the object's size in relation to data growth, i.e. we estimate the size of the object we are about to create.
    for example,
    Create table Test1 (Id Number, Name Varchar2(25), Gender Char(2), DOB Date, Salary Number(7));
    A table is created.
    Now, what is the maximum size of a record for this table, i.e. the maximum row length for one record? And how do we estimate this?
    Please help me on this...

    To estimate a table's size before you create it, you can do the following. For each variable character column, try to figure out the average size of the data; say on average the name will be 20 out of the allowed 25 characters. Add 7 bytes for each DATE column. For numbers:
    p = number of digits in the value
    s = 0 for a positive number and 1 for a negative number
    round((p + s) / 2) + 1
    Now add one byte for the null indicator for each column, plus 3 bytes for the row header. This is your row length.
    Multiply by the expected number of rows to get a rough size, which you then need to adjust for the pctfree factor used in each block and for block overhead. With an 8K Oracle block size, if you use the default pctfree of 10 then you lose 819 bytes of storage. So 8192 - 819 - 108 (estimated overhead) = 7265 usable. Now divide 7265 by the average row length to get an estimate of the number of rows that will fit in this space, and reduce the number of usable bytes by 2 bytes for each row. This is your new usable space.
    So: number of rows x estimated row length, divided by the usable space per block, gives the size in blocks. Convert to megabytes or gigabytes as desired. (A worked sketch for the Test1 example follows below.)
    HTH -- Mark D Powell --
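    A worked sketch of Mark's arithmetic for the Test1 example above. The 20-character average for NAME and the ~6-digit assumption for ID are illustrative assumptions, not facts from the thread:

      SELECT 20                 -- NAME   : assume an average of 20 of the 25 characters
           + 2                  -- GENDER : CHAR(2)
           + 7                  -- DOB    : a DATE always occupies 7 bytes
           + ROUND(6 / 2) + 1   -- ID     : assumed ~6-digit positive NUMBER, about 4 bytes
           + ROUND(7 / 2) + 1   -- SALARY : NUMBER(7), positive, about 5 bytes
           + 5                  -- one null-indicator byte per column
           + 3                  -- row header
             AS estimated_row_bytes
        FROM dual;

      -- Roughly 46 bytes per row; multiply by the expected row count, then adjust for
      -- PCTFREE and block overhead as described above to convert into blocks and MB/GB.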

  • Multiple block size

    Hi
    When I use multiple block size tablespaces (32K),
    I have to set the DB_32K_CACHE_SIZE parameter.
    Assume the size of the buffer cache is 500M.
    If I set DB_32K_CACHE_SIZE to 200M,
    will there be only 300M available in the buffer cache? How does the allocation work?

    The vast majority of databases do not need to deploy multiple block sizes.
    As noted here, it is not for all databases, only specific ones: http://www.dba-oracle.com/t_multiple_blocksizes_summary.htm
    "I have some doubts in this issue..." When in doubt, consult the official documentation and MetaLink. (A short sketch of the mechanics follows after the excerpt below.)
    Here is IBM's Oracle documentation on multiple blocksizes: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100883
    While most customers only use the default database block size, it is possible to use up to 5 different database block sizes for different objects within the same database.
    Having multiple database block sizes adds administrative complexity and (if poorly designed and implemented) can have adverse performance consequences. Therefore, using multiple block sizes should only be done after careful planning and performance evaluation.
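    A minimal sketch of the mechanics, with made-up names and sizes. With manually sized caches, DB_32K_CACHE_SIZE is a separate pool and does not shrink an explicitly set DB_CACHE_SIZE; under automatic SGA management the 32K cache is not auto-tuned, so its 200M does come out of the total SGA:

      -- Dedicated cache for 32K blocks (required before any 32K tablespace can be used):
      ALTER SYSTEM SET db_32k_cache_size = 200M SCOPE = BOTH;

      -- A tablespace using the non-default block size:
      CREATE TABLESPACE ts_32k
        DATAFILE '/u01/oradata/ORCL/ts_32k01.dbf' SIZE 1G
        BLOCKSIZE 32K;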

  • Larger block size = faster DB?

    hi guys,
    (re Oracle 9.2.0.7)
    It seems that a larger block size makes the database perform operations faster. Is this correct? If so, why would anyone use 2k block sizes?
    thanks

    Hi Howard,
    "it's uncharted territory, especially at the higher blocksizes which seem to be less well tested than the smaller ones"
    Yup, Oracle releases junkware all the time, untested poo . . .
    You complain, file a bug report, and wait for years while NOTHING happens . . .
    Tell me Howard, how incompetent does Oracle Corporation have to be not to test something as fundamental as blocksizes?
    I've seen Oracle tech support in-action, it's like watching the Keystone Cops:
    Oracle does not reveal the depth of their quality assurance testing, but many Oracle customers believe that Oracle does complete regression testing on major features of their software, such as blocksize. However, Oracle ACE Director Daniel Morgan, says that “The right size is 8K because that is the only size Oracle tests”, a serious allegation, given that the Oracle documentation, Oracle University and MetaLink all recommend non-standard blocksizes under special circumstances:
    - Large blocks gives more data transfer per I/O call.
    - Larger blocksizes provides less fragmentation (row chaining and row migration) of large objects (LOB, BLOB, CLOB)
    - Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.
    - Moving indexes to a larger blocksize saves disk space. Oracle says "you will conserve about 4% of data storage (4GB on every 100GB) for every large index in your database by moving from a 2KB database block size to an 8KB database block size."
    So, does Oracle really not do testing with non-standard blocksizes? Oracle ACE Director Morgan says that he was quoting Bryn Llewellyn of Oracle Corporation:
    “Which brings us full circle to the statement Brynn made to me and that I have repeated several times in this thread. Oracle only tests 8K blocks.”
    Wow.
