Partitioning rules

Should partitioning be performed at the cube and RDBMS level? Is this an AND or an OR question, i.e., can you do both, or only one of them?
What are the things to look out for when partitioning?
What are some of the rules?

First - the partitioning concept is related to performance in BI.
It normally affects both data-loading performance and query performance.
Please find below some points related to cube partitioning.
F-fact tables and partitioning:
Similar to PSA tables, the partitioning of F-fact tables is done automatically. The key difference is the partitioning key. SAP BI creates a new partition for every new load job which inserts data into the F-fact table of an InfoCube. The so-called request ID is included in the InfoCube in the form of the package dimension. Therefore the key in the F-fact table which is used to join to the package dimension is also the partitioning key. In the example under b, 18 load jobs inserted test rows into the PSA table. These rows were loaded into the InfoCube via a Data Transfer Process (DTP). Figure 11 shows 4 load requests in the InfoCube, and figure 12 shows that there are 4 corresponding table partitions in the F-fact table. But how did we get 4 requests in the F-fact table while there are 18 requests in the PSA table? Here are two things to consider in order to understand how a DTP inserts data into an F-fact table:
1. When a DTP in delta mode is started immediately after a request has been loaded into the PSA table, it creates a new partition in the F-fact table to store the data.
This is what happened with the first three requests (IDs 413, 415, and 417), which can be seen in figure 11. It also shows that, depending on the load pattern, there may be partitions of very different sizes within the same fact table.
2. When <n> requests are loaded into the PSA table before the DTP starts, the DTP combines all of them into one request in the F-fact table. This is what happened with the other 15 requests in the PSA table, each of which had 110K rows. Figure 11 shows the total number of rows under "Transferred" for request ID 433. In addition, the DTP aggregates the data from the PSA requests by its key columns. In the sample, the 15 PSA requests which were loaded into the InfoCube via one single DTP had 110K rows each and were identical regarding the customer dimension as well as the time dimension (there are only two dimensions in the cube). Aggregation in this case means that all rows with the same dimension values are combined and the key figures are calculated (depending on the aggregation type - typically "sum"). That's why we see 1,650,000 rows transferred but only 110,000 rows inserted into the InfoCube.
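To illustrate the aggregation, here is a minimal SQL sketch, assuming hypothetical table and column names (the real F-fact table and key figures are generated by BW): rows sharing the same dimension keys collapse into one row, and the key figure is summed.
    -- Hypothetical staging and fact tables; 15 identical PSA requests of
    -- 110K rows each collapse into a single 110K-row fact table request.
    INSERT INTO f_fact (key_custdim, key_timedim, amount)
    SELECT key_custdim, key_timedim, SUM(amount)
    FROM   psa_staging
    GROUP  BY key_custdim, key_timedim;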
E-fact tables and partitioning:
For E-fact tables a customer has the freedom to define the partitioning strategy based on the time dimension. Either month or fiscal year can be used to specify the partitions. The option can be found under "Extras" -> "DB Performance" when editing an InfoCube in the Administrator Workbench (transaction RSA1). This can only be done as long as the InfoCube is empty and no data has been loaded. NetWeaver 7.x offers a re-partitioning tool which allows the partitioning of InfoCubes that already contain data to be changed.
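As a rough sketch of what such time-based partitioning looks like at the database level (Oracle syntax; the table, column, and partition names are invented here, since BW generates the real DDL from the chosen time characteristic):
    CREATE TABLE e_fact (
        calmonth NUMBER(6),     -- partitioning key derived from the time characteristic
        key_dim1 NUMBER(10),
        amount   NUMBER(17,2)
    )
    PARTITION BY RANGE (calmonth) (
        PARTITION p201101 VALUES LESS THAN (201102),
        PARTITION p201102 VALUES LESS THAN (201103),
        PARTITION pmax    VALUES LESS THAN (MAXVALUE)
    );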
Table partitioning and delete performance:
The main benefit of range table partitioning in SAP BI is the maintenance of huge tables. Especially when it's necessary to delete requests, the difference might be a factor of 10, 100, 1,000 or even more, depending on the amount of data. With SQL Server 2000 you found the appropriate DELETE statements in an ST05 trace when getting rid of a request. Now - with table partitioning activated - an ST05 trace will include the corresponding ALTER TABLE / SWITCH commands to delete a partition (see figures 23 and 24). What took minutes or hours before will now be done within seconds. To get rid of a partition it's necessary to "switch" it out. This is a pure metadata operation and converts the partition into a normal table, which can then be dropped. Afterwards a merge command is required to adapt the partition function. In the current implementation this might take some time, which is still much less than deleting all the rows. There is a workaround available to avoid moving data during the merge, by switching out the next partition too and switching it back in afterwards, but this is not feasible for SAP.
The deletion of a request in an F-fact table automatically deletes the corresponding requests in all aggregates which were built on top of this InfoCube. But this works only as long as the "compression flag" (described under item 4) is turned off. Otherwise all aggregates have to be recalculated or completely rebuilt, which has a massive performance impact on the whole system. Therefore it's recommended to turn the flag off in case requests are deleted on a regular basis.
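As a hedged sketch of what the traced statements look like on SQL Server (the object names, partition number, and boundary value are invented; BW generates the real statements):
    -- Switch the partition holding the request out into an empty staging
    -- table with an identical structure; a pure metadata operation.
    ALTER TABLE f_fact SWITCH PARTITION 4 TO f_fact_stage;
    DROP TABLE f_fact_stage;
    -- Adapt the partition function afterwards; this merge may still move
    -- data, which is the step the text above says can take some time.
    ALTER PARTITION FUNCTION pf_request() MERGE RANGE (433);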
Regards
Ram.

Similar Messages

  • Partitioning on a table on the field TIMESTAMP WITH TIME ZONE

Hi, I have a very large table which has grown to a size where we are not able to query it efficiently. We have decided to partition the table. But the issue is that the table has a TIMESTAMP WITH TIME ZONE field and not DATE. I have found some links on the web which state this might cause an error. I am planning to create a temp table with the partition rules and then copy data from the original one.
    using CREATE TABLE XYZ PARTITION BY RANGE (ABC) ( ---- Partition rules ------) NOLOGGING AS SELECT * FROM XYZ_ACTUAL where 1 = 2;
    Then if it works fine, I would rename the table with partitions to the actual name.
    Should all this be fine?
    The database is very critical. Hence the dilemma.

Have you tried converting the timestamp with time zone to a character string as a partition key, possibly using an edit mask to control the timestamp components used?
Your plan sounds OK to me - if you can get the partitioned table created - but I would test in a development environment first to see where the Law of Unintended Consequences might decide to manifest itself.
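A minimal sketch of that character-key idea, assuming Oracle and invented names (the derived column would be filled during the copy, e.g. with TO_CHAR(abc, 'YYYYMMDD')):
    CREATE TABLE xyz_part (
        abc     TIMESTAMP WITH TIME ZONE,
        abc_key VARCHAR2(8)    -- character image of abc, used as partition key
    )
    PARTITION BY RANGE (abc_key) (
        PARTITION p2009 VALUES LESS THAN ('20100101'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );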

  • NW 7.3 specific - Database partitioning on top of logical partitioning

    Hello folks,
In NW 7.3, I would like to know if it is possible to add a specific database partitioning rule on top of a logically partitioned cube. For example, if I have an LP cube by fiscal year, I would also like to specifically partition all generated cubes at the DB level. I could not find any option in the GUI. In addition, each generated cube can only be viewed (it cannot be changed in the GUI). Would anybody know if it is possible?
    Thank you
    Ioan

Fair point! Let me explain in more detail what I am looking for. In 7.0x, a cube can be partitioned at the DB level by fiscal period. Let's suppose my cube has only fiscal year 2011 data. If I partition the cube at the DB level by fiscal period into 12 buckets, I will get 12 distinct partitions (E table only) in the database. If the user runs a query on 06/2012, the DB will search for the data only in the 06/2012 bucket - this is obviously faster than browsing the entire cube (even with indexes).
In 7.3, cubes can be logically partitioned (LP). I created an LP by fiscal year - so far so good. Now I would like to partition at the DB level each individual cube created by the LP. Right now I cannot - this means that my fiscal year 2012 cube will have its entire data residing in only one large partition, so a 06/2012 query will take longer (in theory).
So my question is --> "Is it possible to partition a cube generated by an LP into fiscal period buckets?" I believe the answer is no right now (Dec 2011).
By the way, all the above is true in an RDBMS environment - this is not a concern for BWA / HANA, since data there is column-based and stored in RAM (not the same technology as an RDBMS).
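To make the pruning argument concrete, here is a sketch with invented names - with the E table range-partitioned on the fiscal period, the database only has to read the one matching bucket:
    -- Only the partition covering period 06/2012 is scanned; the other
    -- eleven buckets are pruned away by the optimizer.
    SELECT SUM(amount)
    FROM   e_fact_2012
    WHERE  fiscper = '2012006';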
I hope this clarifies my question
    Thank you
    Ioan

  • [solved] thunar and udev rules

I want to hide my Windows partitions (sda1 and sda2) in Thunar.
But even if I set udev rules... they still appear.
    → cat /etc/udev/rules.d/10-hide-partitions.rules
    KERNEL=="sda1",ENV{UDISKS_PRESENTATION_HIDE}="1"
    KERNEL=="sda2",ENV{UDISKS_PRESENTATION_HIDE}="1"
    → udevadm info --query=all -n /dev/sda1 |grep PRESENTATION
    E: UDISKS_PRESENTATION_HIDE=1
    E: UDISKS_PRESENTATION_NOPOLICY=0
What am I doing wrong?

    ty
    works

  • Hiding disks/partitions with udev stopped working!

    I used to place a file `/etc/udev/rules.d/99-hide-partitions.rules' with the content:
    KERNEL=="sda1", ENV{UDISKS_PRESENTATION_HIDE}:="1"
    KERNEL=="sda2", ENV{UDISKS_PRESENTATION_HIDE}:="1"
    KERNEL=="sda4", ENV{UDISKS_PRESENTATION_HIDE}:="1"
    which makes partitions other than sda3 invisible for my desktop users. After today's upgrade (gnome libraries) it stopped working. Any idea how to fix it or make it some other way?
    Best regards,
    /m

    Thanks, solved it.  I requested a merge with https://bbs.archlinux.org/viewtopic.php?pid=1091530.
    https://wiki.archlinux.org/index.php/Xf … _xfdesktop
    # cat /etc/udev/rules.d/hide-partitions.rules
    KERNEL=="sda1", ENV{UDISKS_IGNORE}="1"
    KERNEL=="sda2", ENV{UDISKS_IGNORE}="1"
    KERNEL=="sda3", ENV{UDISKS_IGNORE}="1"
    KERNEL=="sda4", ENV{UDISKS_IGNORE}="1"
    EDIT: updated to reflect proper syntax.

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively higher ratio of JVMs to cores --> about 2 to 1.
- We're using a simple benchmarking client that is issuing a combination of puts/gets against the distributed cache. The ids for these puts/gets are randomized (completely synthetic, I know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
    What we see:
- When adding a new node to a cluster under steady load (~50% CPU idle avg), there is a very slight degradation, but only very slight. There is no apparent pause, and the maximum operation times against the cluster might barely exceed ~100 ms.
    - When later removing that node from the cluster (kill the JVM, triggering the coherence supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?

Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations seem hung behind the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!

  • How big an HDD

I've got an iMac/G3/500 with a 30GB HDD (it's a DV Special, I suspect).
Can I replace the HDD with a bigger model? If so, what is the bus maximum, and do I need to create a small first partition for the OS?

    The 8GB first partition rule only applies to 233-333MHz iMacs. 
It's not an iMac DV Special Edition as those were 400MHz with 13GB drives. It's probably an iMac (Summer 2000), which were 500MHz and came with 30GB drives. Later 500MHz models only had 20GB drives.
    cheers
    mrtotes

  • Considerations for an OS X install on a Beige

    Well, not being able to view QT movie trailers w/ the QT 6 Pro I paid 30 bucks for a couple of years ago was the last straw. It is now obvious that I need to update the Mac OS on my Beige or give up using it as my primary home computer. I need the OS update for effective net surfing and to run periodic apps no longer offered in OS 9 (e.g., Turbo Tax). So, I would greatly appreciate answers to the following questions and any other "words of wisdom" you may have regarding my intended OS update.
    First, is there any value in trying OS 10.2.8 or should I just go and try to install and use Panther? The last time I looked at OS X it was Jag on a "new" Mac a few years ago. It was so painfully slow that my 6100 would put it to shame. I understand that Panther finally got some quickness into OS X, but it was also not designed to be run on a Beige.
    My HDD is an 80 GB Maxtor on an ATA133 card. It has two partitions, 20 GB and 60 GB, w/ OS 9.2.2 on the 20 GB partition. As I understand it, the 8 GB partition rule doesn't apply to HDDs on PCI cards, correct?
    Is it proper or best to install OS X on the same partition as OS 9 or on a different partition?
Other mods to my Beige include a 1 GHz, G3, Powerlogix processor upgrade, a USB 1.1 card, and a mutant Radeon 7000 video card (flashed Asian PC card w/ 64 MB VRAM). Could any of these present a problem for the recommended OS X vintage install?
    Thanks in advance for answers, comments, and recommendations.
    Carl B.

    Hi carl,
    I upgraded to Panther (via XPostFacto 3.1 of course) only last Sunday. It is, to me, a significant improvement over Jag, and the installation was a breeze by comparison. I removed nothing from the computer, even leaving in the Lite-On CD-RW drive that would not install Jag (see this thread).
    Speed differences are hard for me to judge. I was having some directory problems with the older boot drive, and I installed a fresh new drive as part of the Panther upgrade. I started my OSX experience with Jag. The speed was acceptable but not as frisky as was OS9. However, part of that is due to the Beige not using Quartz Extreme for finder functions. QE can be enabled, but may have some downsides if you do 3D gaming.
    Today, I would say go straight to Panther. I'm at 10.3.9 and all is stable.
One caveat that may not bite you (1GHz processor advantage?) is related to QT. I, too, was keen to get QT7 so I could continue to enjoy the movie trailers. The new trailers are dreadfully slow and choppy, even with my 128MB VRAM vidcard. I tried some older saved QT movies with QT7 and they ran great. New content must need more power. It is my fear that the newer QT movies are optimized for a G4 and that's why all is so slow.
    You are right: the 8GB partition rule only applies to the logic board ATA bus.
I did not install any ATI drivers other than those included with OSX. The Radeon 9200 is doing fine, shown by excellent performance in a flight sim that shouldn't really run on an 8-year-old machine. I recommend trying the flashed Radeon with native drivers--it should work.

  • [SOLVED / Workaround] Hiding a TrueCrypt volume

    I have a TrueCrypt file that contains my personal information. I mount it from the command line when I need it. How can I hide the mounted TrueCrypt volume from appearing in my file manager (PCManFM)?
    I tried creating a udev rule in "/etc/udev/rules.d/99-hide-partitions.rules":
    KERNEL=="dm-[0-9]*", ENV{UDISKS_IGNORE}:="1"
    It had a small effect: the volume still appears in the file manager but it has a different name.
    Here is the output from "udevadm monitor" when I mount the volume:
    KERNEL[720.311712] add /devices/virtual/bdi/0:34 (bdi)
    UDEV [720.329283] add /devices/virtual/bdi/0:34 (bdi)
    KERNEL[720.523306] change /devices/virtual/block/loop0 (block)
    KERNEL[720.525338] change /devices/virtual/block/loop0 (block)
    KERNEL[720.535095] add /devices/virtual/bdi/254:0 (bdi)
    UDEV [720.537028] add /devices/virtual/bdi/254:0 (bdi)
    KERNEL[720.537390] add /devices/virtual/block/dm-0 (block)
    UDEV [720.538406] add /devices/virtual/block/dm-0 (block)
    KERNEL[720.540339] change /devices/virtual/block/dm-0 (block)
    UDEV [720.558886] change /devices/virtual/block/loop0 (block)
    UDEV [720.588484] change /devices/virtual/block/loop0 (block)
    UDEV [720.612304] change /devices/virtual/block/dm-0 (block)
    How can I hide the volume from my file manager? Should I use udev or another method?
    Thank you!

    drcouzelis wrote:and two, if "dm-0" is the udev name of the device I'm trying to hide...
    Yes, as far as I know dm-0 is correct, you can use it like that in a udev rule.
Did you have a look at '/lib/udev/rules.d'? Maybe you need to change the "99" in 99-hide-partitions.rules - it needs to have a different number, I guess, judging from the files in '/lib/udev/rules.d'.
I would call it 15-hide-partitions.rules. Really, I don't know if it works, but you could try.
    I can't check it for you as I don't have a dm device on this machine.
edit: Ah, so now I see what you mean, although I don't have a solution.
I checked with a TrueCrypt volume on a USB disk; all other USB disks are correctly ignored, as is the USB disk with the TrueCrypt volume on it. But when I mount it, it shows up in the list.
I tried to write a few rules for it, but they all fail. I use a little script to enable/disable it, which is working fine for USB.
I guess you already issued a command like this to find out more about your device:
#udevadm info -a -p $(udevadm info -q path -n /dev/sd**)
Because it looks like when you issue the command, dm-0 does not appear in the list - I only see it with "udevadm monitor" - so I'll try some more; I think, however, we need some help.
edit2: I have been playing around with udev and I just couldn't get it to work, so I wrote a little script for it; maybe you can use it, with an alias in your zshrc.
    It's just a little mount/unmount script, check your major/minor number, and change the mountpoint to yours.
#!/bin/bash
# Toggle the TrueCrypt volume mounted at /media/truecrypt1.
# "mountpoint -d" prints the major:minor numbers of the device backing
# the mountpoint; 254:0 corresponds to /dev/dm-0 here.
if [ "$(mountpoint -d /media/truecrypt1)" = "254:0" ]; then
    sudo umount /media/truecrypt1
    echo "Device /media/truecrypt1 is unmounted"
else
    sudo mount /dev/dm-0 /media/truecrypt1
    echo "Device /media/truecrypt1 is mounted"
fi
exit 0

  • Partitioning on Oracle 8.0.6 (rule base vs. cost base)

At my current engagement, we are using Oracle Financials 11.0.3 on Oracle 8.0.5, which uses the rule-based optimizer. However, it has been planned to upgrade the database from Oracle 8.0.5 to Oracle 8.0.6 as well as to implement Oracle partitioning. With this in mind, we are concerned about possible performance issues that the implementation of partitioning may cause, since the RBO does not recognize it.
We agree that the RBO will see a non-partitioned table the same as a partitioned one. In this scenario, where you gain the most is with backup/recoverability and general maintenance of the partitioned table.
Nevertheless, we have a few questions:
When implementing partitions, will the optimizer choose to go with cost-based vs. rule-based for these partitioned tables?
Is it possible that the optimizer might get confused by this?
If this change from RBO to CBO does occur, the application could potentially perform poorly because of the way it has been written.
    Please provide any feedback.
    thanks in advance.

    If the CBO is invoked when accessing these tables, you may run into problems.
    - You'll have to analyze your tables & ensure that the statistics are kept up to date.
    - It's possible that any SQL statements which invoke the CBO rather than the RBO will have different performance characteristics. The SYSTEM data dictionary tables, for example, must use the RBO or their performance suffers dramatically. Most of the time, the CBO beats the RBO, but applications which have been heavily tuned with the RBO may have problems with the CBO.
    - Check your init.ora to see what optimizer mode you're in. If you're set to CHOOSE, the CBO will be invoked whenever statistics are available on the table(s) involved. If you choose RULE, you'll only invoke the CBO when the RBO encounters situations it doesn't have rules for.
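For example (on a database of that vintage ANALYZE, rather than DBMS_STATS, was the usual tool; the schema and table names below are placeholders):
    -- Gather statistics so the CBO has accurate input.
    ANALYZE TABLE ap.ap_invoices_all COMPUTE STATISTICS;
    -- For very large or partitioned tables, estimating is cheaper:
    ANALYZE TABLE ap.ap_invoices_all ESTIMATE STATISTICS SAMPLE 10 PERCENT;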
    Justin

  • Partitioning on Oracle 8i (Rule Based vs. Cost Based)

At my current engagement, we are using Oracle Financials 11.0.3 on Oracle 8.0.6. The application uses the rule-based optimizer. The client wants to implement Oracle partitioning. With this in mind, we are concerned about possible performance issues that the implementation of partitioning may cause, since the RBO does not recognize it.
We agree that the RBO will see a non-partitioned table the same as a partitioned one. In this scenario, where you gain the most is with backup/recoverability and general maintenance of the partitioned table.
Nevertheless, we have a few questions:
When implementing partitions, will the optimizer choose to go with cost-based vs. rule-based for these partitioned tables?
Is it possible that the optimizer might get confused by this?
Could it degrade performance at the SQL level?
If this change from RBO to CBO does occur, the application could potentially perform poorly because of the way it has been written.
    Please provide any feedback.
    Thanks in advance.

    If the CBO is invoked when accessing these tables, you may run into problems.
    - You'll have to analyze your tables & ensure that the statistics are kept up to date.
    - It's possible that any SQL statements which invoke the CBO rather than the RBO will have different performance characteristics. The SYSTEM data dictionary tables, for example, must use the RBO or their performance suffers dramatically. Most of the time, the CBO beats the RBO, but applications which have been heavily tuned with the RBO may have problems with the CBO.
    - Check your init.ora to see what optimizer mode you're in. If you're set to CHOOSE, the CBO will be invoked whenever statistics are available on the table(s) involved. If you choose RULE, you'll only invoke the CBO when the RBO encounters situations it doesn't have rules for.
    Justin

  • Custom Design rules on table partitions

    Hi
I need to create several custom design rules at the table partition level.
For example, one of the rules is:
for all table partitions
  if a table partition name begins with M
    then it should not be compressed
    and also should not be in tablespace called xyz
How do I go about enforcing this rule using the design rules?

    Hi,
here is a simple example; you can improve it easily. In fact you have two rules, and it's better to create two separate rules.
var ruleMessage;
var errType;
var table;
//define the function
function checkPartitions(){
    ruleMessage = "";
    model = table.getDesignPart();
    tp = model.getStorageDesign().getStorageObject(table.getObjectID());
    result = true;
    if(tp!=null){
        partitions = tp.getPartitions().toArray();
        for(var i=0;i<partitions.length;i++){
            partition = partitions[i];
            // rule 1: partitions whose name begins with "M" must not be compressed
            if(partition.getName().startsWith("M") && "YES".equals(partition.getDataSegmentCompression())){
                result = false;
                ruleMessage = "Partition " + partition.getName()+" for table "+tp.getLongName()+" cannot be compressed";
                break;
            }
            // rule 2: partitions whose name begins with "M" must not be in tablespace xyz
            tablespace = partition.getTableSpace();
            if(tablespace!=null && "xyz".equals(tablespace.getName()) && partition.getName().startsWith("M")){
                result = false;
                ruleMessage = "Partition " + partition.getName()+" for table "+tp.getLongName()+" cannot be in tablespace xyz";
                break;
            }
        }
    }
    return result;
}
//call the function
checkPartitions();
You should define it for the "Table" object, and your physical model should be open.
    Philip

  • AAD Sync - updates to attribute and partition filter rules are not applied

    The first Attribute filter rules and/or Directory Partition filters we add with a new AAD Sync Installation work fine immediately.
Any subsequent changes to the rules / new rules / removed rules / updated partition filters don't have an impact on the filtering behavior until we reboot the AAD Sync host.
We've tried restarting the sync service, all sorts of Full Syncs, etc.; nothing helps.
We don't have any duplicate rules (we get those regularly due to the known bug, but always remove them).
    We're running AAD Sync build 1.0.0485.0222.
    Thanks for any suggestions.

Thanks for your swift suggestions. We've set up a new environment now which finally works fine. We have the same issue at the customer's site, where we'll verify next week (100k user accounts; filters are crucial there).
The problem seems to have been the Full Import that we didn't do, or didn't do in the right place (we can no longer verify, since the original test environment is now gone). We were probably doing a Full Synchronization instead (as suggested in the documentation on MSDN).
Still interesting that the reboots had helped every time - they don't imply a Full Import just by themselves, I assume?
    Thanks again,
    René

• Unable to transport transfer rules in BI 7 (Partition in PSA)

    Hi,
I want to activate transfer rules (T.R) in Production but am not able to, due to partitioning in the PSA:
Errors accessing partition parameters for table /BIC/B0000341000 (-> long text)
We used the program called RS_TRANSTRU_ACTIVATE_ALL, but we still get the same error.
Even when we transported the T.R from QA to PRD, the transport failed with error 8. The error analysis gives the same message: "PSA partition".
    Object Type: Master data Text(0ACTIONREAS_TXT)
    Thanks,
    Gattu.
    Thanks= Points in SDN

    Laxman,
Try to repair the PSA table at RSRV --> All Elementary Tests --> PSA Tables --> Consistency Between PSA Partitions and SAP Administration Information.
After the repair, try to activate again.
If it still does not work, then empty the PSA data in the production system (if it is not required) and import the transfer rules and activate if required.
    Hope it Helps
    Srini

  • How do i get win 8.1. *pro* for mac parallels, deauthorizing win 8.1 pro and license rules for windows vms

    ok, i've seen this question asked a number of times.  i know the general procedure:
    - first download some installer things to a window box so i can make a bootable DVD / flash drive
    - go the mac and use the DVD / usb stick to install the virtual machine
    great.. BUT :
    1) unfortunately, i'm still really confused at the  most basic part which is where (link) do i actually get the initial download thing to create the 8.1 *PRO* from microsoft and where (link) / how do i pay for the subscription key to install the thing???? 
    i keep seeing a link for windows 8 or windows 8.1 where windows PRO is NEVER mentioned.  i ABSOLUTELY MUST HAVE a) PRO 64 bit and NOT vanilla windows and b) it MUST BE A FULL INSTALL (i'm not upgrading from windows 7, 8 or 8.1 NON pro..  i'm going
    to be installing this thing on a brand new system.
    i'm an IT guy for two decades and i can tell you trying to find, download, install a clean version of windows 8.1 PRO is a royal pita and nothing about windows 8.1 PRO is intuitive on the MS site.  it's almost as if they do NOT want you to install the
thing.   i'm not trying to bust jewels here, i'm just being honest and providing feedback that you folks need to make this much more clear and intuitive.  people really shouldn't have to post on forums to find out how to purchase, download, and enter
a subscription key for a base version of an OS.  it's crazy.
    2) once i can figure out how to download, install and register the darn thing ;)... i'm curious how the licensing works.
    2a) first if my windows 8.1 pro experiment on mac parallels sucks, i want to be able to remove the authorization so i can reinstall 8.1 pro on another system.   can i / how do i do this in my scenario?
    2b) i actually already bought windows pro 8.1 about a year ago (to upgrade a laptop that came with 8 that i needed to upgrade to vanilla 8.1 then to finally to pro) so that i could windows phone 8.1 development with simulator on the laptop.  unfortunately
    the laptop maxed out at 4G and the performance of the emulator was ridiculously slow that it's unusable.  i'd love to somehow be able to put the laptop back to windows *vanilla* 8.1 and reclaim my 8.1 pro that i *already* purchased by removing the authorization
    from that laptop allowing me to reinstall the full 8.1 pro with the same key on the mac with parallels.  then i can sell the laptop that has sat there collecting dust for a year and regain my windows 8.1 pro investment vs having to buy another copy for
    the mac. 
    i have no idea how to do this though i think it should be possible.  can anybody tell me where / how i do this?
    note the win pro 8.1 install never went well so i actually had some guy from MS login to my laptop remotely and do the install.. in doing so i believe he wiped away the original windows vanilla 8.1 recovery partition asus put on the drive meaning i'm not
    sure i can get the laptop back to 8.1  or 8 vanilla so i can resell it functionally to another person while removing my 8.1 *pro* license from it.   any suggestions here highly welcome.
2c)  once i get the windows 8.1 pro installed in a vm under parallels, i'll have one licensed copy running.  my guess is that is what i'm limited to.  but when you start working with VMs the whole purpose is to be able to create multiple test
environments and dispose of them when you don't need them etc.   since the windows install will be tied to the single mac hw id, will i be able to install multiple *win* vms (duping an existing win vm for instance) from the one license, or must i purchase
multiple win licenses for each win vm on the SAME system?   if you need a new license for each vm, wouldn't i have to do a FULL WINDOWS INSTALL every time i wanted to create another windows VM.. THAT IS CRAZILY INEFFICIENT...
    there has to be some kind of option to dup a windows licensed vm on the SAME system yes?  what are the rules / procedures for making this work.
    i'm using parallels on mac as an example but the same question would apply to using hyperV (which i know nothing about at this point) as the virtual host platform as well.
    i'm open to well thought out links that actually answer these questions succinctly in addition to comments here
    thanks in advance

    Hello bigbobber,
Do you have an issue using Windows 8.1 Pro in Parallels on a Mac?
For this issue, it is recommended that you contact Parallels support.
Regarding the license issue, please read the following article about the Buyer's Guide and the Microsoft software license agreement for Windows 8.1.
http://www.microsoft.com/licensing/about-licensing/windows8-1.aspx
It is recommended that you contact phone support if you have a license issue.
    Best regards,
    Fangzhou CHEN
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
