Segmentation process

Hi,
Can anyone help me with using the Segment Builder for the following scenario? We have a customer structure that looks like this:
1. Corporate group (like Walmart)
example attribute: size
2. Individual stores
example attribute: address
3. Contact persons for the individual stores
example attribute: hobbies
The connections between these three levels are standard relationships.
The segmentation process looks like this:
First, do a segmentation at group level, i.e. get all groups with size > 10,000 employees.
Second, get all individual stores related to the groups selected above and filter these on country = Sweden.
Third, for the selected stores, get all contact persons with hobby = Golf.
Is it possible to use the Segment Builder for this kind of segmentation? I know that it is no problem to do segmentation for accounts and contact persons, but this process involves an additional step (the selection of corporate groups, step 1).
For the target group, an external file is to be created with info from all three levels (i.e. size, country, hobby).
Thanks for any help
Anna

Hi,
Thanks for the response. I have now created my own InfoSet using BUT000 and BUT050, with the store level on the left-hand side, connected to the store group via BUT050 and to the store-group attributes in BUT000.
I can now segment using attributes from CRM tables at both store and store-group level, resulting in a profile containing the relevant stores. From this profile I can continue my segmentation using marketing attributes and BW queries at the store level. When I have finished my segmentation on stores, I generate a target group for the contact persons of the stores and continue the segmentation at the contact-person level.
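Roughly, the join this InfoSet performs corresponds to the following SQL (a sketch only, not the actual InfoSet definition; PARTNER, TYPE and NAME_ORG1 are BUT000 fields, while the relationship field names are assumptions for illustration):

select store.partner   as store_id,
       grp.partner     as group_id,
       grp.name_org1   as group_name
from but000 store
join but050 rel on rel.partner1 = store.partner   -- standard relationship: store -> group
join but000 grp on grp.partner  = rel.partner2
where store.type = '2';                           -- BP category '2' = organization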
Some questions:
- In the above scenario I can only segment the store groups at InfoSet level, i.e. I cannot use marketing attributes or BW queries there, because stores are the base on which I have to build the profile. Is there any way I can do the following:
1. Build a profile on store groups using InfoSets/BW queries/marketing attributes
2. Generate a profile using stores as the base, and continue the segmentation using InfoSets/BW queries/marketing attributes on stores
3. Generate a profile using contact persons, and continue the segmentation using InfoSets/BW queries/marketing attributes on contact persons
The end result will be a target group with the relevant contact persons and their stores.
Another question: as I have understood it, a target group is a table of partner GUIDs. There is a set of BAdIs you can enhance to collect attributes for these GUIDs and display them in the Segment Builder when you open the target group. In the case of relationships, does the target group contain the partner GUID for the customer + the relationship + the partner GUID for the contact person?
How does the triggering of the different BAdIs take place? In which case is the BAdI for the non-relationship target group called, and in which case the BAdI for the relationship target group?
Thanks for any help in this matter.
Br
Anna

Similar Messages

  • Speed up Qmaster's "Merging distributed QuickTime segments" process?

    I'm finding that Qmaster's "Merging distributed QuickTime segments" process takes as long as or LONGER than the time it takes to generate the encode itself.
    I found this related discussion but no resolution: http://discussions.apple.com/thread.jspa?threadID=1107862&tstart=56
    There must be some un-optimized or rate-limiting bit of code in Qmaster; I can't explain why the "Status: Processing: Merging distributed QuickTime segments" step takes so long. We have a Compressor v3 setup with 24-36 virtual cores depending on config. The thing blazes through the encode process, doing an SD movie in about 4-6 minutes. It's beautiful how fast it runs, but when it gets back to merging segments, the disk activity does not exceed 15-20MB/sec read/write.
    I tested my disks on concurrent read/write (this is a simple RAID0 array) and was seeing 80-90MB/sec concurrent read and write on the same volume. There's no knob, switch, or settings file parameter from what I can tell. It's incredibly frustrating for the disk operation to take LONGER than the encode.
    Has anyone been able to overcome this problem? I am contemplating SSDs for the qmaster temp, but I don't think that would speed anything up. Does anyone know if this also exists in Qmaster 3.5?

    Hi all, thanks for the replies thus far.
    Jon Chappell > Compressor Repair is my best friend! Appreciate the links to the other forum with the benchmark results. I'm going to re-create those tests and see how my results compare.
    All nodes are connected via 2Gb ethernet (bonded), so network speeds/transfers between cluster storage, source, and encoders are excellent. During encoding, each node is pulling data in over the network from the centralized storage server at well over 100MB/sec and placing the resulting segment on the qmaster temp area on the controller node.
    The bottleneck is what happens when the controller (an 8-core 2.66GHz Xserve with 2TB RAID0'd drives) stitches the segments back together again. It's painfully slow, around 20MB/sec, when I know the disks are capable of about 3-4x that. During this process, no data is being pulled over the network from the encoder nodes, as they've dumped their segments in the cluster storage node's temp area. It's all local disk I/O during the stitch operation.
    Perhaps there's some single-threadedness in that process that prevents qmaster from harnessing all the resources available. All machines in the cluster are identical, so I'm convinced it's something in qmaster's design that may not be user serviceable.

  • BW as Datasource in CRM Segmentation Process

    Hello,
    We have created some reports in BW to use as a data source for segmentation in CRM. In order to access these reports in BW while creating a DataSource with origin type BW, we need to create a new RFC destination with a BW dialog user.
    Does anyone know what kind of authorization we need for this dialog user in BW? Thanks in advance,
    Raj Kasa

  • Why setting a ROLLBACK SEGMENT's MINEXTENTS to 20 or more is beneficial

    Product: ORACLE SERVER
    Date written: 2003-06-19
    Why setting a ROLLBACK SEGMENT's MINEXTENTS to 20 or more is beneficial
    =========================================================
    PURPOSE
    This note introduces the following topics. It covers the rollback segment
    tablespace configuration that must be considered to meet the requirements
    of a database application.
    Creating, Optimizing, and Understanding Rollback Segments
    - Rollback segment structure and how it is written
    - Oracle's internal mechanism for assigning rollback segments to transactions
    - Rollback segment size and count
    - Testing to determine rollback segment size and count
    - Rollback segment extent size and count
    - Why setting a rollback segment's minextents to 20 or more is beneficial
    - The rollback segment Optimal storage parameter and Shrink
    Explanation
    Rollback segment structure and how it is written
    A rollback segment consists of several contiguous blocks called extents.
    A rollback segment writes to its extents in an ordered, circular fashion:
    when the current extent is full, it moves on to the next extent.
    A transaction writes a record at the current location in the rollback
    segment, then advances the current pointer by the size of the record.
    The position in the rollback segment where records are currently being
    written is called the "Head". The "Tail" is the starting position of the
    oldest active transaction record in the rollback segment.
    Oracle's internal mechanism for assigning rollback segments to transactions
    When a new transaction requests a rollback segment, Oracle checks the
    number of active transactions using each rollback segment and assigns the
    rollback segment with the fewest active transactions.
    Rollback segments must be large enough to handle the transaction load,
    and there must be enough of them that transactions can use as many
    rollback segments as they need.
    1. A transaction can use only a single rollback segment.
    2. Multiple transactions can write to the same extent.
    3. The Head never intrudes into an extent currently in use by the Tail.
    4. The extents of a rollback segment form a ring; when looking for the
    next extent they are never skipped or used out of order.
    5. If the Head cannot find a usable next extent, a new extent is
    allocated and added to the ring.
    Given these rules, you can see that transaction time is just as important
    a consideration as transaction size.
    Rollback segment size and count
    Whether a rollback segment is large enough depends directly on
    transaction activity. Size the rollback segments based on the transaction
    activity that usually occurs; if the problem is an occasional large
    transaction, handle it with a separate rollback segment.
    While transactions are running, the Head must not wrap around so quickly
    that it catches the Tail, and when a long-running query is executed
    against frequently changing data, the rollback segment must not wrap
    around, so that read consistency can be maintained.
    The reason to choose an appropriate number of rollback segments is to
    prevent contention between processes. You can check for contention
    through the V$WAITSTAT, V$ROLLSTAT, and V$ROLLNAME views with the
    following query:
    sqlplus system/manager
    select rn.name, (rs.waits/rs.gets) rbs_header_wait_ratio
    from v$rollstat rs, v$rollname rn
    where rs.usn = rn.usn
    order by 1;
    If the rbs_header_wait_ratio returned by this query is greater than 0.01,
    add more rollback segments.
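    If the ratio indicates contention, an additional rollback segment can be
    created and brought online along these lines (a sketch only; the segment
    and tablespace names are illustrative):
    create rollback segment rbs05
    tablespace rbs
    storage (initial 10m next 10m);
    alter rollback segment rbs05 online;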
    Testing to determine rollback segment size and count
    1. Create a rollback segment tablespace.
    2. Decide how many rollback segments to create for the test.
    3. Create the rollback segments with equal-sized extents; choose the
    extent size so that there are about 10 - 30 extents at maximum growth.
    4. Use a minextents of 2 for the rollback segments.
    5. Bring only the test rollback segments and the system rollback segment online.
    6. Run transactions, loading the application if necessary.
    7. Check for rollback segment contention.
    8. Monitor how large the rollback segments grow at most.
    Rollback segment extent size and count
    The test shows the maximum size to which a rollback segment grows; this
    figure is called the "minimum coverage size". If contention occurs,
    repeat the test while increasing the number of rollback segments. Also,
    if the extent count needs to go below 10 or above 30, repeat the test
    while increasing or decreasing the extent size.
    When choosing the extent size for a rollback segment, it is recommended
    to create all extents with the same size.
    Make the size of the rollback tablespace a multiple of the extent size.
    For optimal performance, a rollback segment's minextents should be 20 or more.
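    For example, a rollback segment following these recommendations
    (equal-sized extents, minextents of 20 or more) could be created like
    this; the names and sizes are illustrative only:
    create rollback segment rbs01
    tablespace rbs
    storage (initial 10m next 10m minextents 20);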
    Why setting a rollback segment's minextents to 20 or more is beneficial
    Rollback segment space is allocated dynamically, and when it is no longer
    needed (if the Optimal parameter is set), fully committed extents are
    released (deallocated) down to the optimal size.
    The fewer extents a rollback segment has, the larger the chunks of space
    that are allocated and released, compared with a segment that has many
    extents.
    Consider the following example.
    Suppose a rollback segment of about 200M consists of two 100M extents.
    When this rollback segment needs additional space, given that all
    rollback segment extents should have the same size, another 100M extent
    must be allocated.
    As a result the rollback segment grows by 50% of its previous size,
    which is likely far more space than is actually needed.
    By contrast, consider a 200M rollback segment made up of twenty 10M
    extents.
    When additional space is needed here, only a single 10M extent has to be
    added.
    If a rollback segment consists of 20 or more extents, then whenever one
    more extent is added, the total size of the rollback segment grows by no
    more than 5%.
    In other words, space allocation and release can happen more flexibly and
    easily.
    To summarize, with 20 or more extents in a rollback segment, space
    allocation and release become that much smoother.
    In fact, many tests have shown that processing is considerably faster
    when the extent count is 20 or more.
    One thing is certain: allocating and releasing space is not a cheap
    operation.
    In practice, performance degrades while extents are being allocated and
    deallocated.
    Even though the cost of a single extent may not be a big problem,
    rollback segments allocate and release space constantly, so the
    conclusion is that small extents are far more efficient in terms of cost.
    The rollback segment Optimal storage parameter and Shrink
    Optimal is a rollback segment storage parameter used to keep only the
    optimal size worth of extents in the rollback segment on deallocation.
    It is used with the following command (the optimal size must be
    specified within the storage clause):
    alter rollback segment r01 storage (optimal 1m);
    Once the segment exceeds the optimal size, fully committed extents are
    released so that only the optimal size remains.
    That is, the segment is kept at the size specified by optimal: it grows
    to a certain size, and when the next transaction takes that rollback
    segment, it is resized back to the optimal size.
    When the most recently used extent of the rollback segment fills up and
    another extent is requested, the optimal size is compared with the
    current segment size; if the segment is larger, tail extents not
    involved in any active transaction are deallocated.
    Specifying an optimal size is necessary because one rollback segment can
    occupy so much space that other rollback segments run short of free
    space in which to allocate extents.
    In other words, setting the optimal parameter is efficient in terms of
    space availability.
    The shrink command is executed as follows; if no size is specified, the
    segment shrinks to the optimal size:
    alter rollback segment [rbs_name] shrink to [size];
    The segment may not shrink immediately after the shrink command: it does
    not shrink while a transaction is using it, and shrinks once the
    transaction has ended.
    It takes about 5 - 10 minutes after the session exits for Optimal to
    take effect.
    What is a reasonable OPTIMAL size?
    => Around 20 - 30 extents is appropriate; the size varies with the
    nature of the batch jobs, and it does not matter at all if the sum of
    the optimal values exceeds the size of the datafile.
    Setting the optimal size equal to initial/next is not good, because a
    shrink would then occur every time an extent is allocated.
    It is recommended to compute the average size of the rollback segments
    and use that as the optimal size.
    Use the following query at peak time to get the average size of the
    rollback segments:
    select initial_extent + next_extent * (extents-1) "Rollback_size", extents
    from dba_segments
    where segment_type = 'ROLLBACK';
    The average of these sizes (in bytes) can be used as the optimal size
    for the rollback segments.
    Note that if shrinks happen too often or the optimal value is set too
    small, the probability of an ORA-1555 (snapshot too old) error
    increases; it may therefore be better not to use optimal at all, or to
    set it to as large a value as possible.
    The view in which you can check a rollback segment's optimal size is the
    dynamic view V$ROLLSTAT, in its OPTSIZE column.
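    For instance, the optimal setting of each rollback segment can be
    checked with a query along these lines, using the views named above:
    select rn.name, rs.optsize
    from v$rollstat rs, v$rollname rn
    where rs.usn = rn.usn;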
    Reference Documents
    <Note:69464.1>

  • Sending marketing campaigns to contact persons - How?

    Hey guys,
    What I want to do is use the functionality in the Web UI where you can build, out of your target group (which includes organizational accounts), another target group (which includes the contact persons of the organizational accounts of the first target group). With your help I think I have almost figured out how this can be achieved, but please send me a reply to verify or correct my assumptions about the process.
    1. In case I want to search for attributes of contact persons (who belong to an organization), I have to set up the following:
    I used the standard InfoSet "CRM_MKTTG_BP_ORG" (BP: contact person relations of organization) and created a data source from it. I created an attribute list from this data source, and I created filters for the fields "BP category" (value = organization) and "BP relationship category" (value = has contact person).
    SAP told me to create those two filters, but I do not really get why I have to set both of them. I got the same results using the combinations 'has contact person AND organization AND filter X', 'has contact person AND filter X', and 'organization AND filter X'. Please explain the differences to me (see the SQL sketch at the end of this post)!
    My result is that I get all contact persons with the attribute I was searching for, right? So am I right that this option helps me find all contact persons who have a certain attribute, but does NOT filter all organizations with a certain attribute and then give me the associated list of contact persons?
    2. In case I want to search for attributes attached to organizations, and then get a list of the contact persons of those organizations, I have to set up the following:
    a) For an attribute set
    If I want to use an attribute set of an organization for my data source, I should use:
    Segmentation object - business partner
    Origin type - attribute set
    Attribute set - your custom attribute set for the organization
    Function module - CRM_MKTTG_PF_BP_TAB_TO_CP
    Create an attribute list based on the data source created. This will also give the list of contact persons for organizations as a result of the segmentation process.
    b) For an InfoSet
    If I want to use an InfoSet instead of an attribute set, I have to do almost the same:
    Segmentation object - business partner
    Origin type - InfoSet
    Attribute set - InfoSet X
    Function module - CRM_MKTTG_PF_BP_TAB_TO_CP
    Create an attribute list based on the data source created. This will also give the list of contact persons for organizations as a result of the segmentation process.
    Did I understand the process correctly? Please correct me where I got anything wrong, especially with the filters and the AND combinations of the first option.
    3. I heard something as well about a segmentation basis. *How would I do the segmentation based on the segmentation basis?* Would I need to choose "valid for Segment Member Relationships"? What does this mean, how exactly would I need to build my segmentation basis, and what result would I get?
    I created two segmentation bases, one which only includes organizations and one which only includes persons. So in case I chose "valid for Segment Member Relationships" and not "segmentation members", would I get, in addition to (let's say) all organizations, the contacts which have the attributes of the profile I want to create?
    But I cannot use "valid for Segment Member Relationships" if I want ONLY the contacts of the organizations and persons, right? And it is not similar to the option of including the function module in the attribute set or InfoSet, where I get the contacts based on a target group of organizations, right?
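    To illustrate my question about the two filters, here is roughly how I picture what the InfoSet returns, as a SQL sketch (table and field names such as RELTYP and 'BUR001' are assumptions for illustration; the real InfoSet is defined in the system, not by this query):
    select org.partner as organization, cp.partner as contact_person
    from but000 org
    join but050 rel on rel.partner1 = org.partner
    join but000 cp on cp.partner = rel.partner2
    where org.type = '2'           -- filter 1: BP category = organization
    and rel.reltyp = 'BUR001';     -- filter 2: relationship category = has contact person
    Without filter 1 the join could start from business partners of any category; without filter 2 it would follow every relationship category, not just "has contact person".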
    Thanks for your help.
    Best regards,
    Janine

  • How to write a Data Plugin to access a binary file

    Hi,
    I'm a newbie to DIAdem. I want to develop a DataPlugin to access a binary file with any number of channels. For example, if there are around 70 channels, the raw data would be in x number of files, each of which may contain around 20 channels. A raw file consists of a header (one per file), channel sub-headers (one per channel), a calibration data segment (unprocessed data) and test data segments (processed data)....
    Each of these contains many different fields under them, and their sizes vary....
    Could you suggest a procedure to carry out this task, taking into consideration any number of channels and any number of fields under them....
    Expecting your response....
    Jhon

    Jhon,
    I am working on a collection of useful examples and hints for DataPlugin development. This document and the DataPlugin examples are still in an early draft phase. Still, I thought it could be helpful for you to look at it.
    I have added an example file format which is similar to what you described. It's referred to as Example_1. Let me know whether this is helpful ...
    Andreas
    Attachments:
    Example_1.zip ‏153 KB

  • How to write enhancement logic for AR datasource

    I need to enhance the AR datasource (0FI_AR_4) with the following logic:
    LOGIC:
    First find the KB partner function in the field KNVP-PARVW. Once the KB value is obtained, display the value from KNVP-PERNR.
    Thanks..
    Edited by: Jim kim on Jul 25, 2011 10:41 PM
    Moderator message : Spec dumping not allowed. Thread locked.
    Edited by: Vinod Kumar on Jul 26, 2011 10:18 AM

    Hi Ram,
    Generally the IDoc user exit is called at the following places:
      1) When the control record is read.
      2) After each and every segment in the data record
      3) At the end of the data segment processing.
    The IDoc user exit interface generally imports the IDOC_DATA table (the data record internal table). The data records in this internal table must appear in the same order as defined in the IDoc structure (transaction WE30). For SAP standard segments, the SAP code takes care of this. For an extended segment, you have to take care of this aspect yourself by appending the Z-segment to the IDOC_DATA table.
    You can do this by looping at the IDOC_DATA table, for example (the segment name Z1MYSEG is a placeholder for your own Z-segment):
             DATA ls_idoc_data TYPE edidd.
             LOOP AT idoc_data INTO ls_idoc_data.
               CASE ls_idoc_data-segnam.  " segment structure as per the hierarchy
                 WHEN 'Z1MYSEG'.          " your Z-segment
                   " write the logic for filling the Z-segment data
                   " into ls_idoc_data-sdata here
               ENDCASE.
             ENDLOOP.
    Hope this gives some clue.
    Regards,
    Gajendra.

  • Inbound IDOC FINSTA01 status Change

    Hi,
    Actually we are posting an inbound IDoc for lockbox using IDoc FINSTA01. Depending on the header and item amounts, i.e. if both are not equal, we need to post the IDoc with status 64 and a custom message.
    Any User Exit ?
    Thanks,
    Madhu

    Hi,
    Please check these user exits for IDOC_INPUT_LOCKBX.
    EXIT_SAPLIEDP_201 (for Settlement Handling)
    EXIT_SAPLIEDP_202 (for Segment Processing)
    EXIT_SAPLIEDP_203 (for Changes to Payment Advice)
    Again, you need to fill up structure parameter FIMSG with your custom message and raise an exception.
    Regards,
    Ferry Lianto
    Please reward points if  helpful.

  • Compressor doesn't work anymore....

    Hello,
    I've got a really strange bug:
    I cannot use Compressor anymore...
    I can't see the "project" window where all my video sequences are...
    I can only see these windows:
    • "Settings / Destinations"
    • "Inspector"
    • "Preview" (but the preview window looks very buggy, I think...)
    • "History"
    So I can't add any new video to compress/export... and I can't add any setting to export the videos...
    I tried twice to erase all the Compressor applications, application support files, and prefs,
    and to reinstall from the original DVD,
    but it doesn't work, again and again....
    PLEASE HELP !!!
    Now, I don't know what to do...
    Perhaps I didn't erase & re-install the applications and Compressor files properly,
    but CAN ANYONE HELP ME PLEASE ?!!
    Thanks in advance.

    keyman: Try following the steps Compressor Zealot links to. Read the instructions carefully, and make sure you remove all the Compressor/Qmaster files before reinstalling.
    Eyebite wrote:
    don't plan on using Virtual Clustering. As far as I can determine, no one has that working yet.
    Uhmmm... Virtual Clustering works fine, but you can not use virtual clustering when sending your sequence directly from FCP to Compressor.
    Compressor: Don't export from Final Cut Pro using a Virtual Cluster
    To transcode your FCP-sequence using Virtual Clustering with Compressor 3, you can export your sequence as a self-contained ProRes file, and then bring that file into Compressor.
    Keep in mind that job segmenting is not always good for compression.
    Job Segmenting and Two-Pass (or Multi-Pass) Encoding
    If you choose the two-pass or the multi-pass mode, and you have distributed processing enabled, you may have to make a choice between speedier processing and ensuring the highest possible quality.
    The Apple Qmaster distributed processing system speeds up processing by distributing work to multiple processing nodes (computers). One way it does this is by dividing up the total amount of frames in a job into smaller segments. Each of the processing computers then works on a different segment. Since the nodes are working in parallel, the job is finished sooner than it would have been on a single computer. But with two-pass VBR and multi-pass encoding, each segment is treated individually, so the bit-rate allocation generated in the first pass for any one segment does not include information from the segments processed on other computers.
    First, evaluate the encoding difficulty (complexity) of your source media. Then, decide whether or not to allow job segmenting (with the “Allow Job Segmenting” checkbox at the top of the Encoder pane). If the distribution of simple and complex areas of the media is similar throughout the whole source media file, then you can get the same quality whether segmenting is turned on or not. In that case, it makes sense to allow segmenting to speed up the processing time.
    However, you may have a source media file with an uneven distribution of complex scenes. For example, suppose you have a 2-hour sports program in which the first hour is the pregame show with relatively static talking heads, and the second hour is high-action sports footage. If this source media were evenly split into 2 segments, the bit rate allocation plan for the first segment would not be able to “donate” some of its bits to the second segment because the segments would be processed on separate computers. The quality of the more complex action footage in the second segment would suffer. In this case, if your goal were ensuring the highest possible quality over the entire 2-hour program, it would make sense to not allow job segmenting by deselecting the checkbox at the top of the Encoder pane. This forces the job (and therefore, the bit-rate allocation) to be processed on a single computer.
    Note: The “Allow Job Segmenting” checkbox only affects the segmenting of individual jobs (source files). If you are submitting batches with multiple jobs, the distributed processing system will continue to speed up processing by distributing (unsegmented) jobs, even with job segmenting turned off.
    From the Compressor User Manual

  • Send WP_PLU03 from WPMA ...

    Hi all,
    I have a requirement like this.
    If an article is created via the WPMA transaction, a WP_PLU03 IDoc should be generated and sent to the external system (XI) automatically. I am able to create and send it through WE19, but I want it to happen automatically from the WPMA transaction.
    Thanks in advance,
    Ramesh.

    Hi Michal,
    Actually I didn't get what exactly you said,
    but once I run the program from SE38 for
    <b>Initialization (RWDPOSIN)</b>
    I get the following output:
    Processing Status: 1
    Recipient: HYD1
      IDocs created
         Merchandise cate IDoc no. 0000000000200175 structure WPDWGR01 with 000048 segments created
         Set assignments: IDoc no. 0000000000200176 structure WPDSET01 with 000003 segments created
         Exchange rates:  IDoc no. 0000000000200177 structure WPDCUR01 with 000003 segments created
         Trigger file for status ID: 00000000000138
         does not have to be created       No messages needed
    For <b>Change message (RWDPOSUP)</b> the output is:
    Preparation statistics
    Recipient: HYD1       A total of                  0  change documents were checked
      Analysis and preparation
      IDocs created
         Exchange rates:  IDoc no. 0000000000200178 structure WPDCUR01 with 000003 segments created
         Trigger file for status ID: 00000000000139
         does not have to be created       No messages needed
    Overall Statistics of Processing
    Total No. of All Stores Processed:                            1
    Total Runtime of the Processing:                              1  Seconds
    Total No. of All Segments Processed:                          3
    Average Segment Throughput:                               0.333  Sec. per Segment
    Hi Gourav Khare,
    Do I need to maintain a distribution model for my purpose?
    ANY FURTHER help will be appreciated.
    Regards,
    Ramesh.

  • Qadmin cluster setup

    Hi - I have an iMac and a MacBook, both running 10.7.3, with the same versions of Compressor 4, FCPX, and Motion 5. I am trying to set up a cluster with both machines, with the laptop as cluster controller and the iMac doing the heavy lifting. The Qmaster service browser on the MacBook shows both machines with all their service nodes present and accounted for, but greyed out - I can't move them into a cluster controller definition. I got it to work once when setting the iMac as cluster controller, then that stopped also. Also, when I set up a cluster on the iMac, I was able to allocate MacBook services to the cluster, but the jobs never took advantage of them. Is there some sort of comm link setup I am missing?
    Please help.
    .... clueless

    No, we're singing off the same sheet.....
    I have another clustering tidbit. If you embed videos in Motion 5, as one is apt to do via linked media, and submit the job to a cluster set up according to the Matt Hughes arrangement, the service nodes drop the linked videos (substituting a filled alpha of sorts), and the resulting render will not feature said items. It appears that linked media of that nature is not made available to the service node to do its work. And if it is intended to be, but is prohibited by some Lion rights-protection feature, the 'successful' outcome in the Qmaster job status is masking any such trouble. I know this because I checked all the segments processed by the nodes vs. the cluster controller (where the media resides) and - yep - the rendering failures are only on the nodes, for linked media.
    Ergo: any complex Motion products are best processed in situ, where all the linked media resides. My guess is it's just more NFS trouble, but the fact that Qmaster tells you that everything is OK means there are fundamental protocol failures.

  • Increasing Processor allocation / speed for encoding / burning Blu Rays

    I have several large HDV QuickTime movie files that I have exported from FCP 6.06, and I am burning them to BD-R discs using Toast 10.2 and my LaCie external Blu-ray burner. Since Toast uses H.264, it takes quite a while for my PowerBook G4 running Leopard 10.5.8 to encode and then burn the discs. Is there some way to increase the amount of processing power and speed by devoting more than one parallel processor to the task, through Qmaster or something? I don't need to be doing ANYTHING else with my Mac during this time-consuming process!

    Of course a single movie would be broken up to compress in smaller parts. It's called segmented processing, and it's what Qmaster does when you set up a distributed processing network. But you need more than one computer to make it worthwhile, and they have to be networked on, at minimum, a gigabit network, but preferably on a faster Fibre network.
    The problem that comes with job segmenting is if you use a variable bit rate, because the bit-rate information isn't shared between/amongst the nodes. If you use CBR, you're all set, but like I said, it won't matter unless you have more than one computer, and you're limited by the slowest computer in the chain, as well as by the network.
    If you use less than a gigabit connection, the time it takes to transfer the information to and from the cluster controller, and then reconstitute the final video is more than the time it would take to just do it on a single computer.

  • CS4 for burning Blu-rays?

    I'm thinking about upgrading from CS3 to CS4 and was wondering if the Encore CS4 was more reliable for burning blu-rays than CS3. I have been receiving all sorts of errors that have been driving me nuts.
    I'm using a Matrox Axio LE with windows XP.

  • CM02 Transaction is showing Capacity requirements in Hours

    Hi guys,
    I have a question regarding the CM02 transaction. Please help me find out what is wrong with it; let me walk you through the steps I take:
    CM02 > work center name > plant
    Click on Planning (upper left corner) => Profiles > Option Profile, and select SAPB023, which is Weeks (78)
    Click on the Standard Overview tab (upper left corner)
    Select any one of the weeks and hit Cap. details/Period
    My problem begins here:
    In the Rem.proc column (remaining capacity requirements for operation segment "Processing") the value is shown in minutes; however, the Reqmnts (capacity requirements) column shows the same number with the unit hours, for instance a value of 30 min shows as 30 hours in the Reqmnts column.
    Why is it showing the unit of measure in hours rather than minutes, and secondly, where is this value coming from?
    Thanks in advance.

    CM02 > work center name > plant
    Click on Planning (upper left corner) => Profiles > Option Profile, and select SAPB023, which is Weeks (78)
    Click on the Standard Overview tab (upper left corner)
    Now, before you do the step below, go to Settings > General > you will get a pop-up screen for Settings: Value display.
    There is a checkbox there for the unit of measure for capacity.
    Activate the checkbox and then follow the step below. The F1 help for this checkbox is self-explanatory.
    Now do the step below:
    Select any one of the weeks and hit Cap. details/Period
    Kindly let me know the feedback.
    reg
    dsk

  • External data enrichment Provider

    Hi Experts,
    Can anyone list the "external data enrichment providers" who do the cleansing and parsing of data out of MDM and send it back to MDM?
    Kind Regards
    Eva

    Hi,
       Informatics Outsourcing is an offshore data enrichment service company.
    The data enrichment process will assist in:
      • Adding important missing details, such as first names and dates of birth, which allows you to optimize locating activities or personalize and target in the case of promotional campaigns
      • Adding telephone numbers or XD flags
      • Profiling consumer types using a consumer segmentation process; this enables marketing campaigns and can highlight potentially lucrative consumers
      • Attaching descriptions and values to the addresses for all the profiles
    For more information, search for "Informatics Outsourcing" on Google.
