LPAR - LOGICAL PARTITION QUESTION

Hello SDN Experts.
Our current production environment is a distributed installation on IBM System p5 570 servers running AIX 5.2. Each node runs two applications: SAP ERP 2005 SR1 (ABAP + Java) and CSS (Customer Service System).
Node One
• SAP Application (Central Instance, Central Services)
• Oracle 9i Instance for the CSS Application
Node Two
• Oracle 10g Instance for the SAP Application
• CSS Application
To improve performance we are planning to create a new LPAR for SAP.
According to the IBM hardware partner, each LPAR is logically isolated with its own hardware/software resources (CPU, memory, disk, IP address, hostname, mount points).
Question:
I see two possible ways to copy the SAP instances (application + database) to the new LPAR. Can I apply SCENARIO 2, which in my opinion is easier than SCENARIO 1?
SCENARIO 1.
To migrate the application and database instances to the new LPAR, do I need to follow the procedure explained in the guide below?
(*) System Copy for SAP Systems Based on SAP NetWeaver 2004s SR1 ABAP+Java Document version: 1.1 ‒ 08/18/2006
SCENARIO 2.
After creating all required file systems in AIX, copy the data from the application and database instances to their respective LPARs, then change the IP addresses and hostnames in the parameter files according to the following SAP Notes:
Note 8307 - Changing host name on R3 host
Note 403708 - Changing an IP address
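(For illustration only: a quick first check after such a copy. The SID C11 and hostname oldhost below are placeholders, and an ABAP+Java system has many more hostname references than the profiles alone, as the reply below explains.)
# hypothetical sketch - adjust SID and paths to your installation
grep -ril oldhost /sapmnt/C11/profile
grep -ril oldhost /usr/sap/C11/SYS/profile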
Which scenario does SAP recommend in this case?
Thanks for your comments.

If your system is a combined ABAP + Java instance you can't manually change the hostname. It's not only the places listed in that note but many more, some in .properties files on the filesystem, some in the database.
Doing it manually may work, but since the process is not documented anywhere, and since it depends on the applications running on top of the J2EE instance, it is not supported.
For ABAP + Java instances you must use the "sapinst way" to get support in case of problems.
See Note 757692 - Changing the hostname for J2EE Engine 6.40/7.0 installation
Markus

Similar Messages

  • Use of Logical Partition in an Oracle Table...

    What is the use of a Logical Partition in an Oracle table as target? The technical manual does not say anything about its significance.
    My question is:
    If the table has no partitions and we add logical partitions using Data Services, what purpose will it serve?
    We are planning to load 30 million records a day into an Oracle table. As of now the target table has no partitions, and we are planning to add them soon. Is there a better way to load the data into the target table - using partitioning, bulk loading (API), degree of parallelism, etc.? We have not dealt with data of that volume, so inputs are highly appreciated.
    Regards.
    Santosh.

    Initial Value:
    Indicator that NOT NULL is forced for this field
    Use
    Select this flag if a field to be inserted in the database is to be filled with initial values. The initial value used depends on the data type of the field.
    Please note that fields in the database for which this flag is not set can also be filled with initial values.
    When you create a table, all fields of the table can be defined as NOT NULL and filled with an initial value. The same applies when converting the table. Only when new fields are added or inserted are they filled with initial values. An exception is key fields; these are always filled automatically with initial values.
    Restrictions and notes:
    The initial value cannot be set for fields of data types LCHR, LRAW, and RAW. If the field length is greater than 32, the initial flag cannot be set for fields of data type NUMC.
    If a new field is inserted in the table and the initial flag is set, the complete table is scanned on activation and an UPDATE is made to the new field. This can be very time-consuming.
    If the initial flag is set for an included structure, this means that the attributes from the structure are transferred. That is, exactly those fields which are marked as initial in the definition have this attribute in the table as well.
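    As a rough illustration only (an assumption about what activation does on the database, not taken from the help text; table and field names are invented), the time-consuming case corresponds to something like:
    sqlplus -s user/pass <<'SQL'
    ALTER TABLE ztab ADD (newfield VARCHAR2(1));             -- new field, initially NULL
    UPDATE ztab SET newfield = ' ';                          -- full scan: every existing row updated
    ALTER TABLE ztab MODIFY (newfield DEFAULT ' ' NOT NULL); -- enforce NOT NULL + initial value
    SQL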
    hope it helps,
    Saipriya

  • 0IC_C03 related Inventory Process - Logical Partitioning (Vs) Physical Part

    Hello Everyone,
    After going through multiple postings throughout the forum and documentation from SAP, it appears that the 0IC_C03 InfoCube, when used with non-cumulative key figures, is not recommended to be partitioned logically by fiscal year/calendar year, as queries will read all the data sets due to the stock marker logic.
    In our specific scenario,
    1. After the InfoCube (0IC_C03) was enhanced with additional characteristics such as document number, movement type and so on due to business requirements, I was not able to actually use the non-cumulative key figures, as they were not populated in the report.
    2. So we decided not to use the non-cumulative key figures but rather to create two cumulative key figures (Issue Stock Quantity - Receipt Stock Quantity, and Issue Valuated Stock Value - Receipt Valuated Stock Value); both are available in the InfoCube and are calculated during the update process.
    These two key figures are cumulative, with exception aggregation LAST based on 0CALDAY.
    The question is,
    Since we are not using the actual non-cumulative key figures (even though we are not using them, we still have the stock marker updated and the data compressed based on it, along with the validity table defined), can we do logical partitioning on the InfoCube based on calendar year?
    Thanks....

    Hello Elango,
    Appreciate your response.
    First off, I do understand the difference between logical and physical partitioning, and the question is not about joining them together.
    I am sorry if others cannot understand the detailed issue posted. My apologies were part of a polite gesture; please do respond with a proper, precise answer if you think you actually understood the question.
    The question here is about how I can improve query performance and ease administration by logically breaking down the data.
    The issues due to which I am trying to look into different aspects of logical partitioning are:
    1. If I do logical partitioning by plant, then due to the stock marker logic I cannot archive, since a plant and its related data cannot be archived by a time characteristic when the partitioning is not done by a time characteristic.
    2. The reason I would have to have document number and movement type in the InfoCube is due to the kind of reporting users perform.
    We have a third party system whose data needs to be reconciled to the data in the plants and storage locations.
    To do so, the first step is that users run the report by plant, storage location, and SKU. From there, for the storage locations that have a balance, they drill down to the document number and movement type to see what the actual activity is.
    So, to support this requirement I have to have the above characteristics in the InfoCube.
    The question again is: what is the exact list of issues I would face doing the logical partitioning by a time characteristic?
    Once again, even though the non-cumulative key figures are available in the InfoCube, we are not using them for any reporting purpose, so please keep that in consideration while replying.
    Thanks
    Dharma.

  • Standalone report server not found on the network between logical partitions on AIX

    Hello,
    Here's our architecture:
    Forms/Reports 11gR2 (patchset 1)
    WebLogic 10.3.6
    on IBM AIX 7.1
    Server JRE: Java(TM) SE Runtime Environment (build pap6460sr13ifix-20130303_02 (SR13+IV37419))
    IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc64-64 jvmap6460sr13-20130114_1
    Client JRE: 1.6.0_27
    We have two logical partitions, on two different physical machines, where a cluster of Forms/Reports is installed.
    If I have a report service repsrv11g on one logical partition (say box 100 on physical box 6000), the forms server on the other logical partition (box 101 on physical box 7000) is not able to look up the report service when calling from Forms using RUN_REPORT_OBJECT.
    It gives an FRM-41213 error.
    If I just run the URL (using the 2nd box) http://101:8888/reports/rwservlet/showjobs?server=repsrv11g, it gives REP-51002: Bind to Reports Server repsrv11g failed.
    We thought/read that as long as they are on the same network/domain, the report service would be available.
    We also ran rwdiag.sh on one partition; it is not able to find the other one.
    We ran the test form which Oracle provides, and it is also not able to find the report server on the network when run on the other LPAR.
    Temporarily, we created another report service on the other LPAR, but it still uses the load-balancing DNS when doing WEB.SHOW_DOCUMENT, so it could potentially fail to bring up a report if the load balancer redirects from one forms server to the report server on the other partition.
    Any thoughts would be greatly appreciated.
    Thanks.
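    A minimal discovery check, assuming a standalone Reports instance home (the path below is a placeholder), is to run rwdiag from each LPAR:
    # rwdiag ships with Reports; adjust the path to your instance
    $ORACLE_INSTANCE/config/reports/bin/rwdiag.sh -findall
    $ORACLE_INSTANCE/config/reports/bin/rwdiag.sh -find repsrv11g
    If -findall on box 101 never lists repsrv11g, the discovery broadcast (typically multicast in 11g) is not crossing between the LPARs' networks.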

    Hello,
    Any inputs on this, please?

  • NW 7.3 specific - Database partitioning on top of logical partitioning

    Hello folks,
    In NW 7.3, I would like to know if it is possible to add a specific database partitioning rule on top of a logically partitioned cube. For example, if I have an LP cube partitioned by fiscal year, I would also like to specifically partition each generated cube at the DB level. I could not find any option in the GUI. In addition, each generated cube can only be viewed (it cannot be changed in the GUI). Would anybody know if this is possible?
    Thank you
    Ioan

    Fair point! Let me explain in more detail what I am looking for. In 7.0x, a cube can be partitioned at the DB level by fiscal period. Let's suppose my cube has only fiscal year 2011 data. If I partition the cube at the DB level by fiscal period into 12 buckets, I will get 12 distinct partitions (E table only) in the database. If a user runs a query on 06/2011, the DB will search for the data only in the 06/2011 bucket - this is obviously faster than browsing the entire cube (even with indexes).
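    As a simplified, hypothetical sketch of what that DB-level option amounts to on Oracle (table and column names are invented here; the real BW-generated E table and its SID columns look different):
    sqlplus -s sapsr3/pass <<'SQL'
    -- range-partitioned E fact table: a query restricted to one fiscal period
    -- touches only that one partition (partition pruning)
    CREATE TABLE e_mycube (
      fiscper  NUMBER(7)    NOT NULL,   -- fiscal period key, e.g. 2011001
      key_dim1 NUMBER(10)   NOT NULL,
      amount   NUMBER(17,2)
    )
    PARTITION BY RANGE (fiscper) (
      PARTITION p2011001 VALUES LESS THAN (2011002),
      PARTITION p2011002 VALUES LESS THAN (2011003),
      PARTITION pmax     VALUES LESS THAN (MAXVALUE)
    );
    SQL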
    In 7.3, cubes can be logically partitioned (LP). I created an LP by fiscal year - so far so good. Now I would like to partition each individual cube created by the LP at the DB level. Right now I cannot - this means that my fiscal year 2012 cube will have its entire data residing in only one large partition, so a 06/2012 query will take longer (in theory).
    So my question is: "Is it possible to partition a cube generated by an LP into fiscal period buckets?" I believe the answer is no right now (Dec 2011).
    By the way, all the above applies to an RDBMS environment - this is not a concern for BWA / HANA, since data there is column-based and stored in RAM (not the same technology as an RDBMS).
    I hope this clarifies my question
    Thank you
    Ioan

  • Why logical partition is a must for voting disk and OCR

    Hi Guys,
    I just started handling jobs for RAC installation, I have a simple question regarding the setup.
    Why does a logical partition have to be used for the voting disk and OCR?
    I tried partitioning the disks that were provisioned for the voting disk and OCR with primary partitions, but when OUI tried to recognize the disks, it could not find the ones partitioned with primary partitions.
    Thank you,
    Adhika

    Hello Adhika,
    I found it on this doc http://download.oracle.com/docs/cd/B28359_01/install.111/b28250/storage.htm
    Be aware of the following restrictions for partitions:
    * You cannot use primary partitions for storing Oracle Clusterware files while running the OUI to install Oracle Clusterware as described in Chapter 5, "Installing Oracle Clusterware". You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.
    * With 32-bit Windows, you cannot create more than four primary disk partitions for each disk. One of the primary partitions can be an extended partition, which can then be subdivided into multiple logical partitions.
    * You can assign mount points only to primary partitions and logical drives.
    * You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.
    * Oracle recommends that you limit the number of partitions you create on a single disk to prevent disk contention. Therefore, you may prefer to use extended partitions rather than primary partitions.
    For these reasons, you might prefer to use extended partitions for storing Oracle software files and not primary partitions.
    All the best,
    Rodrigo Mufalani
    http://www.mrdba.com.br/mufalani

  • [SOLVED] Longwinded beginner - Dual-boot & partition questions

    Hello,
    I'm interested in installing Arch Linux alongside Windows XP (dual-boot). I have little previous Linux experience, although I have rented some servers that used it in the past, as well as compiling some stuff with it while at university (studying Computer Science). Nevertheless, I am relatively confident that if I can still boot into XP, I will be able to accustom myself to it, and I like the fact that this distribution seems to be hands-on and leaves a lot up to the user.
    I've been reading the Beginner's Guide and the dual boot guide, and I would like to get started, however, I'm not going to go ahead with this until I am certain that I will be left with a system that can still boot into Windows XP. I assume that it'll take me a while to get to grips with Arch, and in the meantime it would be massively inconvenient if I couldn't work/play/etc...
    What I already know
    Anyway, currently I have a 250GB hard drive that I use for Windows (as well as 3 other hard drives full of stuff). I have partitioned the drive with Windows XP on it with gparted like so:
    (in order)
    UNALLOCATED                         32GB
    SDB1 (Windows XP)                 50GB
    SDB2 (Downloads)                  150GB
    I hope to use the unallocated space to hold linux (and then have access to my other windows drives in the future, using ntfs-3g), however, I am a little confused over what partitions I 'should' have and how large they should be, considering that I will use the OS to mainly develop, browse the web, listen to music, etc...
    I was thinking:
    /boot   -- ext2      -- 100MB
    /       -- ext4      -- 15GB
    swap    --           -- 1GB
    /home   -- ext4      -- 12GB
    /var    -- ReiserFS  -- 4GB
    Questions
    • Is 30GB too little, even though most of my stuff is on other NTFS hard drives?
    • How large should / be? I've read that it contains /bin, /dev, /etc and others. How do I know how much space these need? Am I misunderstanding things?
    • Is a /var partition unnecessary? How large should it be?
    • 10GB for /home, 1GB for swap, 100MB for /boot?
    • Do I need a /tmp or /usr? This is a single-user machine, but I don't want it to get messy!
    • I was thinking of giving /boot ext2, and /var ReiserFS, and then giving every other partition ext4. That okay?
    • Do I need to set these partitions up when installing, or can I set them up in advance with gparted - it might be simpler.
    • Due to already having 2 NTFS primary partitions on the hard disk, I presume that some of the above will need to be logical partitions in an extended partition? How is this done? (A sketch follows this post.)
    Once the partitions have been set up, and Linux is installed, I presume it's just a matter of completing the rest of Part I of the guide, and then amending /boot/grub/menu.lst to include 'Windows XP'? At that point I am able to restart into Windows XP, and only delve into Arch when I want to continue with the configuration, fixing, and so on...
    Sorry for the wall of text, and thanks for your patience. (:
    Last edited by Bedtimes (2009-09-27 14:21:55)
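    On the extended/logical partition question above, a hypothetical fdisk session for carving up the 32GB gap (device letters and sizes are placeholders - double-check with 'fdisk -l' first; gparted can do the same graphically):
    # run as root; sdb assumed, as in the layout above
    fdisk /dev/sdb
    #   n -> e          new extended partition spanning the unallocated 32GB
    #   n -> l, +100M   logical partition for /boot
    #   n -> l, +15G    logical partition for /
    #   n -> l, +1G     logical partition for swap
    #   n -> l          logical partition for /home (remaining space)
    #   w               write the table, then mkfs/mkswap each logical partition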

    That's the thing, I expect that I'm doing something wrong with the GRUB loader - and I admit my hard disk layout has been quite strange for a long time before installing linux.
    Basically, it currently looks like this:
    /dev/sda1    ntfs    Music           250GB
    /dev/sdb3    ext2   /boot           120MB
    /dev/sdb4    extended
    ---- /dev/sdb5    linux-swap       1GB
    ---- /dev/sdb6    ext4    /           20GB
    ---- /dev/sdb7    ext4    /home   12GB
    /dev/sdb1    ntfs    Windows XP  50GB
    /dev/sdb2    ntfs    Downloads    150GB
    /dev/sdc1    ntfs    TV & Movies   950GB
    • This list is in the order that the entries appear on the hard disk, hence /boot is in the first 1024 cylinders of the disk; as you can see, the sdb numbers are actually in the chronological order in which I created the partitions.
    • I used an extended partition with logical partitions inside, since I had read that there was an issue with more than 4 partitions on a hard disk, and I already had 2 NTFS partitions.
    • When it asked me to install GRUB to the MBR, I installed it to sdb as opposed to sdbX as the manual asked. This is the drive that contains /boot!
    • I just managed to amend something in the menu.lst so that I can boot into Windows XP. Therefore my machine is not totally fucked up any more. (: Unfortunately, what I changed doesn't make sense to me, since I would have expected Windows XP to be on a different hard disk.
    The contents of sdb3:
    grub    kernel26-fallback.img    kernel26.img
    lost+found    System.map26    vmlinuz26
    When typing the command /sbin/blkid:
    /dev/sda1: UUID="D0..." LABEL="Music" TYPE="ntfs"
    /dev/sdb1: UUID="A8..." LABEL="Windows XP" TYPE="ntfs"
    /dev/sdb2: UUID="557..." LABEL="Downloads" TYPE="ntfs"
    /dev/sdb3: UUID="2676..." TYPE="ext2"
    /dev/sdb5: UUID="0474..." TYPE="swap"
    /dev/sdb6: UUID="0886..." TYPE="ext4"
    /dev/sdb7: UUID="519becf..." TYPE="ext4"
    /dev/sdc1: UUID="46AC59" LABEL="TV & Movies" TYPE="ntfs"
    Inside /boot/grub/menu.lst:
    timeout 5
    default 0
    color light-blue/black light-cyan/blue
    # (1) Windows XP
    title Windows XP
    rootnoverify (hd0,0)
    chainloader +1
    # (2) Arch Linux
    title Arch Linux
    root (hd1,5)
    kernel /boot/vmlinuz root=/dev/disk/by-uuid/0886... ro vga=773
    initrd          /boot/kernel26.img
    # (3) Arch Linux (Fallback)
    title Arch Linux (Fallback)
    root (hd1,5)
    kernel /boot/vmlinuz root=/dev/disk/by-uuid/0886... ro vga=773
    initrd          /boot/kernel26-fallback.img
    edit: I'm able to access all of the installation partitions from gparted-live's terminal (by mounting the devices I need into folders under my root folder), so is there anything else you want me to check/change in order to find my Linux root/boot partitions?
    Last edited by Bedtimes (2009-09-27 12:54:24)

  • Any real reason for logical partitioning over physical?

    Hi!
    I have seen a number of scenarios where SAP BI (assuming BI 7.0 for the rest of the discussion), running in high-volume scenarios, is cluttered with a lot of logically partitioned cubes joined by multiproviders.
    Obviously the disadvantage of using logical partitions is that it increases maintenance effort: you need a new update rule for each logical partition (cube), you need to manually add/delete cubes from the multiprovider, you must filter data in the update rules to reach the correct cube based on a time characteristic, etc.
    I have seen one clear advantage, which is the parallelization of queries run against a multiprovider - assuming you want to hit all underlying cubes... but are there any other advantages that overcome the maintenance overhead?
    For me it feels like using physical database partitions in the same cube would be the correct decision in 90% of the cases. It seems to me that the underlying RDBMS should be able, on its own, to:
    1) Parallelize a query over several physical partitions if needed.
    2) Be smart enough to query only the needed partition if the query is restricted on the partitioning characteristic.
    Please correct me, anyone - when are logical partitions really justified?
    Best regards,
    Christian
    Edited by: Christian on May 15, 2008 3:55 PM

    Hi,
    This is a great question. Generally it is very difficult to understand the real motivation for physical partitioning into multiple cubes. You are right, it definitely increases the maintenance overhead, and you have already pointed out both the advantages and disadvantages.
    Partitioning into multiple cubes is more useful where we have huge amounts of data. Imagine a cube with 3 or 4 GB of data - not usual, but possible. Table partitioning is useful with small InfoCubes, less than 1 GB. With bigger InfoCubes, table-level partitioning may not provide the required level of performance. Too many small partitions would also reduce performance; too few partitions, and query performance will not be as good as we want. In this scenario, we can use physical partitioning (multiple cubes) combined with table-level partitioning to achieve the required performance levels. On top, we can even think of using aggregates to further improve performance.
    While all the above seems relevant for older versions of BW (up to 3.5), BI 7.0 has the BIA (BI Accelerator), which works on a blade server with all the data cached directly in main memory. I am not sure how much this impacts data modeling - I have not started working on the BIA yet.
    rgds
    naga

  • Impact of logical partitioning on BIA

    We are on BI release 7.01, SP 5. We are planning to create logical partitions for some of the InfoCubes in our system. These cubes are already BIA-enabled, so will the creation of logical partitions have any impact on the BIA, or improve the BIA rollup runtime?

    Hi Leonel,
    Logical partitioning will have an impact on BIA in terms of performance.
    The current cube is already indexed on BIA. If you now divide the current cube's data into different cubes and create a multiprovider on top, then each cube will have its own F-table index on BIA.
    You have to add the new cubes to BIA, execute the initial filling step, and set up rollups for the respective cubes.
    Point to note: as data is deleted from the current cube and moved to other cubes, the corresponding entries are not deleted from the F-table indexes in BIA. There will be index entries for records which are no longer present in the cube. Thus it is always good practice to flush BIA, which removes all the indexes for the current cube, and then create new indexes on BIA.
    That way we have consistent indexes on BIA, which will not hamper performance.
    This will also improve rollup time, as there will be less data in each cube after logical partitioning. For further rollup improvement, we can implement delta indexing on BIA as well.
    Question: why do we want logical partitioning for cubes which are present on BIA, given that queries will never hit the cubes in the BI system?
    Regards,
    Kishanlal Kumawat.

  • Logical Partitioning - How much % improvement?

    Hello All,
    I want to know, if we use logical partitioning for a cube, by what percentage performance will improve, and how it depends on the data loaded into the cube.
    Q1: Logical partitioning of cubes - what % performance improvement can be expected?
    Q2: What % performance improvement can be expected if, say, the number of records in the cube is half the current number?
    Thanks in advance,
    Bandana.

    Bandana,
    I would say that the question is incorrect ..( Carl Sagan..? )
    First - the percentage improvement depends on the amount of data. If my cube has 100 records and I do a logical partition, I will not see any improvement - but ramp the numbers up to 1 billion records and I see very significant improvements.
    Let's say your cube has data for 2010 and 2011 and you want to logically separate the data. Typically logical partitioning means separating the data into separate cubes by year / country etc. - any partition that is not using the default partitioning on 0FISCPER / 0CALMONTH etc.
    If this is not what you are looking at, or if I am wrong, please correct me here...
    Now if you split your cube into two parts - one cube for 2010 and one for 2011 - the time required to access the data in the 2010 cube decreases, because it has a cube of its own. This is the benefit you get in terms of data access times, but it is not going to be a 100% improvement just because you have a cube for each year; it depends to a large extent on the query as well, but in almost all cases you will see a performance benefit.
    I am not able to understand your second question.

  • Trying to run from logical partition - GRUB doesn't detect it

    Hi all, I only have a single SSD in my laptop with the following setup (in order on disk):
    sda1 - System Reserved
    sda2 - Windows 8
    sda4 - Extended Partition
        sda5 - Arch 2 (this is where I'm trying to move my Arch install to)
    sda3 - Arch (this is the one I can actually boot into)
    (unallocated space)
    This is what I want things to look like eventually:
    sda1 - System Reserved
    sda2 - Windows 8
    sda3 - Extended Partition
        sda4 - Arch
        sda5 - some other distro
        sda6 - storage (I want this accessible by all OSes)
    Here's how I think I'm going to do this:
    - move the Arch installation from after the extended partition to before it (since the extended partition is a big chunk of mostly free space, and I would like it to go last)
    - after verifying that I can boot the Arch installation from the logical partition, delete the Arch installation on the primary partition and absorb the freed space into the extended partition
    Here's what I've done:
    - used 'dd if=/dev/sda3 of=/dev/sda5 bs=4096 conv=notrunc,noerror' to clone my working Arch installation to the logical partition (sda5).
    - ran 'grub-mkconfig -o /boot/grub/grub.cfg' to update GRUB and add the logical partition to the list of choices in the GRUB menu.
    After running 'grub-mkconfig', nothing new was added to the menu.
    Questions:
    - Is what I want to do even possible?
         - can I have the logical partition marked for storage be accessible to Windows 8?
    - Is there an easier way to do what I want to do?
    - How do I get GRUB to detect the logical partition?
         - Is it possible to boot into a logical partition?

    grub doesn't always automatically pick up Arch installations. Is os-prober installed? You can always add the entry manually, e.g. in 40_custom or whatever. Don't forget to adjust fstab.
    Just to be clear: you have an MBR-formatted disk and are talking about traditional logical partitions, right? Not LVM?
    Last edited by cfr (2013-02-02 01:31:48)
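    For reference, a hypothetical manual entry for the clone (UUIDs and kernel/initrd names are placeholders; /etc/grub.d/40_custom is itself a shell script whose output grub-mkconfig appends):
    #!/bin/sh
    exec tail -n +3 $0
    # manual GRUB2 entry for the Arch clone on sda5
    menuentry "Arch Linux (sda5)" {
        search --no-floppy --fs-uuid --set=root XXXX-sda5-uuid
        linux /boot/vmlinuz-linux root=UUID=XXXX-sda5-uuid ro
        initrd /boot/initramfs-linux.img
    }
    One caveat worth checking: a dd clone keeps the old filesystem UUID, so sda3 and sda5 share one UUID until it is changed (e.g. tune2fs -U random /dev/sda5), which can confuse both os-prober and fstab.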

  • Finding whole mapping from database file - filesystems - logical volume manager - logical partitions

    Hello,
    I am trying to reverse-engineer the mapping from database files to their physical carriers on logical partitions (fdisk),
    and I am not able to trace the whole path from the filesystem down to the partitions through the intermediate logical volumes.
    1. select from dba_data_files ...
    2. df -k
    to get the listing of filesystems
    3. vgdisplay
    4. lvdisplay
    5. cat /proc/partitions
    6. fdisk /dev/sda -l
       fdisk /dev/sdb -l
    The problem I have is that I am not able to determine which partitions make up which logical volumes, and then which logical volumes make up each filesystem.
    Thank you for hint or direction.

    Hello Wadhah,
    Before starting the discussion, let me explain that I am a newcomer to Oracle Linux. My background as an Oracle DBA is on IBM UNIX (AIX 6.1) and Oracle 11gR2.
    The first task is to get the complete picture of one database on Oracle Linux for future maintenance tasks, to make the database more flexible, and to prepare for more intense work:
    - adding datafiles
    - moving/optimizing archived redo log files onto a filesystem separate from ORACLE_BASE
    - separating audit log files from $ORACLE_BASE onto their own filesystem
    - separating the diag directory onto a separate filesystem (logging, tracing)
    - adding/enlarging the TEMP tablespace
    - adding/enlarging undo
    - enlarging redo for a higher transaction rate (to reduce the number of log switches per unit time seen in alert_SID.log)
    - adding online redo log and control file mirrors
    So in this context I am trying to inspect the disk space from the highest logical level (V$ and DBA views) down to the fdisk partitions.
    The idea was to go in these steps:
    1. select paths of present online redo groups, datafiles, controlfiles, temp, undo
       from V$, dba views
    2. For the paths from step 1, locate the filesystems, and for those filesystems determine which are on logical volumes and which are directly on partitions.
    3. For all logical volumes used, locate the logical partitions and their disks: /dev/sda, /dev/sdb, ...
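    A minimal sketch of the missing link, assuming the standard LVM2 userland (run as root):
    # which physical partitions back each logical volume
    lvs -o +devices
    # which volume group each partition (physical volume) belongs to
    pvs
    # if available: one tree from disk -> partition -> LV -> mountpoint
    lsblk
    With that, the filesystem column from 'df -k' (/dev/mapper/VG-LV) joins to the lvs output, and the devices column from lvs (e.g. /dev/sda2(0)) joins to the fdisk partition list.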

  • Sql to create logical partitions

    Oracle: 10.2.0.5
    I am working with another group and they are pulling data from one of the databases I work on. They are using what they call 'logical partitions'. Basically it is a SQL statement with a MOD function in the WHERE clause:
    select *
    from table
    where mod(field,10) = 0
    This allows them to divide the table up into 10 chunks, so they run 10 sessions to pull data across the network. They are using array processing (1000 records at a time) in a 3rd-party tool to pull the data and write it to Teradata. I have no ability to change this process to something else. They are not using a cursor; it's just a fetch of 1000 records at a time. I checked that first.
    The MOD function forces a full table scan. Before I go and add a bunch of function-based indexes to support this, does anyone know of another way to write these SQLs, without a function on the left side of the WHERE clause, that can use an index? I want an index in part because 10 sessions are too slow to pull the data in an acceptable time, so I want to increase the number of sessions I can handle. We are pulling from a number of tables, so if it's all full table scans, I am far more constrained on my side.
    I am hoping there is a way to chunk a table on the fly into buckets and use an index, so I can ramp this up to, say, 20-30 sessions per table, with each session getting 1/20 or 1/30 of the table.
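    One index-free alternative, sketched under assumptions (owner SCOTT and table BIG_TABLE are placeholders; this is the classic do-it-yourself ROWID-range split, not anything from the 3rd-party tool): derive N non-overlapping ROWID ranges from the extent map and give each session one range, so every slice is a contiguous scan of roughly 1/N of the table with no function on any column.
    sqlplus -s scott/tiger <<'SQL'
    -- build 20 ROWID ranges; each extract session then runs
    --   SELECT * FROM big_table WHERE ROWID BETWEEN CHARTOROWID(:lo) AND CHARTOROWID(:hi);
    SELECT grp,
           DBMS_ROWID.ROWID_CREATE(1, data_object_id, lo_fno, lo_blk, 0)     lo_rid,
           DBMS_ROWID.ROWID_CREATE(1, data_object_id, hi_fno, hi_blk, 32767) hi_rid
    FROM (
      SELECT DISTINCT grp, data_object_id,
             FIRST_VALUE(relative_fno) OVER (PARTITION BY grp ORDER BY relative_fno, block_id
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) lo_fno,
             FIRST_VALUE(block_id) OVER (PARTITION BY grp ORDER BY relative_fno, block_id
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) lo_blk,
             LAST_VALUE(relative_fno) OVER (PARTITION BY grp ORDER BY relative_fno, block_id
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) hi_fno,
             LAST_VALUE(block_id + blocks - 1) OVER (PARTITION BY grp ORDER BY relative_fno, block_id
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) hi_blk
      FROM (
        SELECT e.relative_fno, e.block_id, e.blocks, o.data_object_id,
               NTILE(20) OVER (ORDER BY e.relative_fno, e.block_id) grp
        FROM dba_extents e
        JOIN dba_objects o ON o.owner = e.owner AND o.object_name = e.segment_name
                          AND o.object_type = 'TABLE'
        WHERE e.owner = 'SCOTT' AND e.segment_name = 'BIG_TABLE'
      )
    )
    ORDER BY grp;
    SQL
    The ranges are only as balanced as the extent sizes, and (per the reply below) past a point more sessions just saturate the spindles.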

    Guess2 wrote:
    (the original question, quoted in full above)
    From the school of thought that if some is good, then more is better:
    I suspect that the spindle upon which this table resides will be saturated with I/O requests long before 20 sessions are reached.
    A session (CPU) is 100-1000 times faster than a mechanical disk; as few as half a dozen sessions can overwhelm a single disk drive.

  • Logical Partition of infocube

    We have a cube logically partitioned by sales org. We would now like to add additional sales orgs to the list. Is this possible? Currently, with data in the cube, it is blocked. Does anyone know the steps to re-partition a cube logically, with or without reloading data? Any options at the database level (Oracle)?
    Thanks!!

    Hi,
    Yes, that is correct: you can't partition a cube that has existing data.
    Another important thing: the partitioning can only be done on certain selection conditions. There are two such time characteristics:
                   1. 0CALMONTH
                   2. 0FISCPER (fiscal year/period)
    If neither of these two time characteristics is used in the cube, you can't partition it.
    STEPS TO PARTITION THE INFOCUBE:
    For example, suppose you have a Sales and Distribution cube that you want to partition logically.
    1. Create an InfoCube with the same structure (you can do this by giving the technical name of the original cube in the "copy from" option at creation time). Activate the cube and go back.
    2. Create update rules for the new cube.
    3. Select the first cube, right-click it, and choose Generate Export DataSource. The system finishes the generation process.
    4. Select the InfoSource view on the left side of the AWB. There, select DATAMART and choose the refresh icon. You get the InfoSource with the DataSource assignment.
    5. Select the DataSource, right-click it, and choose Create InfoPackage. This takes you to the InfoPackage maintenance screen.
    6. On the data target tab, select the target into which the data is to be updated; schedule it and start.
    7. Manage the data target and check whether the data has been updated.
    8. After all this, go to the InfoProvider view, select the first cube, and delete its data.
    9. Now do the partitioning according to your requirement: double-click the cube to reach the Edit InfoCube screen, choose Extras from the menu bar, and select the partitioning option.
    10. A small wizard pops up showing the time characteristics; based on your requirement, check the one you want and continue the process.
    Hope this will help you
    Thanks and Regards
    Vara Prasad

  • Logical partitioning, pass-through layer, query pruning

    Hi,
    I am dealing with performance guidelines for BW and have encountered a few interesting topics which, however, I do not fully understand.
    1. Maintenance of logical partitioning.
    Let's assume logical partitioning is performed by year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the multiprovider? Is there any automatic procedure from SAP that supports creation of the new objects, or is it fully manual?
    2. Pass-through layer.
    There is very little information about this basic concept. Anyway:
    - Is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after the load finishes successfully (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO functionally replace the PSA (i.e. can the PSA be deleted after every load as well)?
    3. Query pruning.
    Does this happen automatically at the DB level, or are additional developments with exit variables, steering tables, and function modules required?
    4. DSOs for master data loads.
    What is the benefit of using full MD extraction plus DSO delta instead of MD delta extraction?
    Thanks,
    Marcin

    1. Maintenance of logical partitioning.
    Let's assume logical partitioning is performed by year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the multiprovider? Is there any automatic procedure from SAP that supports creation of the new objects, or is it fully manual?
    Logical partitioning is when you have separate ODS objects / cubes for separate years, etc.
    There is no automated way - however, if you want, you can physically partition the cubes using time periods and extend them regularly using the repartitioning options provided.
    2. Pass-through layer.
    There is very little information about this basic concept. Anyway:
    - Is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after the load finishes successfully (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO functionally replace the PSA (i.e. can the PSA be deleted after every load as well)?
    Usually a pass-through layer is used to:
    1. Ensure data consistency
    2. Possibly use deltas
    3. Apply additional transformations
    In a write-optimized DSO the request ID is part of the key, and hence delta is based on the request ID. If you do not have any additional transformations, then a write-optimized DSO is essentially like your PSA.
    3. Query pruning.
    Does this happen automatically at the DB level, or are additional developments with exit variables, steering tables, and function modules required?
    Query pruning depends on the rule-based and cost-based optimizers within the DB; you do not have much control over how a query executes other than having up-to-date statistics, building aggregates, etc.
    4. DSOs for master data loads.
    What is the benefit of using full MD extraction plus DSO delta instead of MD delta extraction?
    It depends more on the data volumes and also on the number of transformations required...
    If you have multiple levels of transformations, use a DSO; or if you have very high data volumes and want to identify changed records, then use a DSO.
