CRM's High Volume Segmentation with HANA without SAP BW.

Hi,
I'd like clarification about the possibility of using, in our CRM's High Volume Segmentation as an Attribute List, fields and information that are not found in the CRM system but are present in the HANA system, and without the availability of a BW system.
I'd like to use transactional data, such as orders and invoices extracted from SAP ECC, which are uploaded to HANA and used there for analytical purposes in BO reports.
I would need to use this information in our CRM's Segment Builder to combine these selections with the typical data and marketing attributes already present in the CRM system.
First of all, I'd like to know whether this kind of segmentation is possible. If so, has anyone had experience of this type? Can I get some specific documentation?
Thanks a lot in advance for your help.
Best regards
Carlo Ferro

Hi
The best option for you, I think, is to run your queries in BW.
Have these created as InfoSet queries so that CRM can then pull them in as a target group.
This would be about your only option, I believe.
Regards
Panduranga

Similar Messages

  • How to prepare high volume segmentation on crm and trex 7.1

    Hi,
    I have trex 7.1 connected to our crm system.
    I have set up esh on the web ui.
    Now I would like to use high volume segmentation.
    So I have created a data source in CRMD_MKTDS based on both an "Attribute Set" and an InfoSet.
    Then I created an attribute list for high volume segmentation.
    But I can't see the columns indicating that fast find is in use.
    Our CRM system is:
    SAP_ABA = SAPKA70106
    kr
    Michael Wolff
    Update:
    I solved this.
    I forgot to define the RFC destination for the TREX index and fast find under Marketing -> Segmentation.
    So now everything works.
    Edited by: Michael W. Knudsen on Oct 21, 2010 6:59 AM

    Did the missing customizing. Thanks Willie for directing me in the correct direction.

  • Error in High Volume Campaign

    Hello Community,
    I am trying to start a high volume campaign with the Marketing Manager role, but it won't work. I start by setting up a segmentation model with usage "High Volume Segmentation" and define my target group with the modeler. After I save everything and return to the initial campaign editing screen to start the campaign, the job is scheduled and immediately canceled. When I click on the job status link (which is "Job canceled") in the "Job Status" column, I get the job overview. The error message I get is
    Renumbering of target groups failed
    This error message only appears when I want to create a High Volume Segmentation. Ordinary campaigns, without the new modeler, run through without any problems.
    I am really struggling with this error and it is driving me crazy! Can anyone help me?
    Thank you very much in advance!
    Greetz
    Ole

    Hello Willie,
    the only error message I get is "Renumbering of target groups failed". In the Execution Monitor the system gives the following feedback:
    (ok checkbox) High-volume execution job created; job name = JOBNAME123 = 17220900 user = XYZ123
    (error checkbox)  Renumbering of target groups failed
    (ok checkbox) High-volume marketing execution job status set to 'Canceled'
    Do you know if I can have a more detailed look at this job in SAP CRM? I might be able to give some more information on this error message if I can have "a look behind the scenes".
    Thanks
    Regards,
    Ole

  • Bank Communication Management without SAP PI?

    Hello @ll,
    I've implemented BCM in my backend system and it works great with SAP Process Integration.
    Now there is one question.
    Can I run BCM without SAP PI and use IBM WebSphere middleware instead?
    Does anyone have experience with this?
    Thanks a lot.
    regards
    armin

    Looks as if no one has experience with this kind of BCM implementation. Or another option is that it just cannot be done.

  • High volume of batches with Split valuation - impact on system performance

    Hi!
    I have a client that intends to load a new material type from their legacy system, which will be automatically batch managed with split valuation. So, the valuation category will be 'X' and the valuation type will also be the batch number, as automatically created on GR.
    The concern of the client is the impact on system performance: up to 80,000 batches per material master record (so 80,000 valuation types will be maintained, each with a unique price in the Accounting 1 tab of the MMR) and overall around 1 million batches a year. I'm not aware of any system performance issues around this myself, but there seems to be anecdotal evidence that SAP has advised against using this functionality with high volumes of batches.
    Could you please let me know of any potential problems that having 1 million batches with split valuation may cause? Logically, this would increase to tens of millions of batches over time until archived off via SARA.
    Many thanks!
    Anthony

    I currently have about 1.5 million batches with split valuation in my system (but it is not the X split), and we archive yearly.
    Having many batches for one material (let's say 1,000) causes dramatic performance issues during automatic batch determination.
    It took about 5 minutes until a batch was returned into a delivery. If the user then wants a different batch and has to carry out batch determination again, he easily spends 10 to 15 minutes on one delivery.
    This is mainly caused by the storage location segment of the batches. If one batch gets moved within a plant through 3 different storage locations, then the batch has 3 records in table MCHB. But SAP has a report to reorganize the MCHB records that have zero stock.
    The X split has more effect; it is not only the batch table that causes issues in this case. With the X split, SAP adds an MBEW record (material master valuation view) for each new batch.
    However, if the design is made to get a certain functionality (here, valuation at batch level), then you have to get proper hardware in place that can give you the performance that is needed.

  • HT6154 Hi, I am still under contract with my iPhone 5 and I can't hear anything even though I have my phone set to high volume

    I am using an iPhone 5 and am still under contract with AT&T. The phone suddenly has a technical issue. When I call someone or someone calls me, or if I turn on the music, anything that has to do with listening on the device does not work.
    I hear a very, very low voice even though it is set to high volume. Will AT&T provide me free support for the phone?

    Only ATT can answer that.

  • How outbound IDoc with Z segment is creating without using a program/FM?

    Hi,
    I am having an outbound IDoc with Z message type and segments.
    I need to find out how this IDoc / IDoc segment is getting created. I tried to find it using the segment's where-used list (in SE11, IDoc segment structure), but it shows that the structure and fields are not used anywhere.
    How is it possible to create a Z segment in an IDoc without using the corresponding structure in some program or function module?
    I have searched Google/SCN for a solution, but didn't find anything.
    Can anyone please help me find out how this IDoc segment is being populated?
    Regards,
    Dipin

    Hi Arthur,
    This IDoc is created as part of a flow (Tcode IW21). I set a breakpoint in the function module ALE_IDOCS_CREATE and processed transaction IW21, but the program didn't stop at the breakpoint, although the IDoc got created.
    Regards,
    Dipin

  • Oracle database integration with SAP PI for high volume & Complex Structure

    Hi
    We have a requirement to integrate an Oracle database with SAP PI 7.0 for sending data, which is eventually transferred to multiple receivers. The involved data structure is hugely complex (around 18 child tables) with a high volume processing requirement (100K+ objects need to be processed in 6-7 hours). We need to implement logic for prioritizing the objects, i.e. high priority objects must be processed first, and then objects with normal priority.
    We considered implementing this kind of logic in database procedures (at least that provides flexibility for implementing the data selection logic, and processed data can be marked as successful in the same SP), but since the PI sender adapter doesn't currently support calling Oracle stored procedures, this option is ruled out. We can try implementing complex data selection using an Oracle table function, but a table function doesn't allow any SQL statement that changes data (UPDATE, INSERT, DELETE, etc.), so it is impossible to mark the selected objects in a table function from the PI communication channel's "Update Query" option.
    Also, we need to make sure that we are not processing all the objects at once, as the message size for 20 objects can vary from 100 KB to 15 MB, which could lead to serious performance issues for the bigger messages.
    Please share any implementation experience for handling issues:
    1 - Database Integration involving Oracle at sender side
    2 - Complex Data structures
    3 - High Volume Processing
    4 - Controlled data selection from the database, to control the message size in PI
    Thanks,
    Panchdev

    Hi,
          We can call the stored procedure using a receiver adapter with ccBPM; we can follow different approaches for reading the data in this case.
    a) A ccBPM instance is triggered by some dummy message. After receiving this message, the ccBPM can make a sync call to the Oracle stored procedure (this can be done using a specific receiver data type structure); on getting the response message, the ccBPM can then proceed with the further steps. The stored procedure needs to be optimized for performance, as the mapping complexity will be largely affected by the structure in which the stored procedure returns the message. Prioritization of the objects can be handled in the stored procedure.
    b) A ccBPM instance first reads data from the header-level table, then makes subsequent sync calls to the Oracle tables to read data from the child tables. This approach is less suitable for this interface, as the number of child tables is big.
    Pravesh.
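    The batching and prioritization requirement from the question (high-priority objects first, message size capped well below the 15 MB worst case) can be sketched independently of PI. This is a hypothetical illustration in Python; the field names, size cap, and object cap are assumptions for the sketch, not part of any PI adapter or Oracle API:

```python
def next_batch(objects, max_batch_bytes=2_000_000, max_objects=20):
    """Pick the next batch of unprocessed objects: high-priority first
    (priority 0 = high), oldest first within a priority, stopping before
    the combined message would exceed the size cap."""
    pending = sorted(
        (o for o in objects if not o["processed"]),
        key=lambda o: (o["priority"], o["created_at"]),
    )
    batch, size = [], 0
    for obj in pending:
        # Always take at least one object so processing makes progress,
        # even if a single object is larger than the cap.
        if batch and (size + obj["size_bytes"] > max_batch_bytes
                      or len(batch) >= max_objects):
            break
        batch.append(obj)
        size += obj["size_bytes"]
    return batch

objs = [
    {"processed": False, "priority": 1, "created_at": 1, "size_bytes": 1_500_000},
    {"processed": False, "priority": 0, "created_at": 2, "size_bytes": 1_500_000},
    {"processed": True,  "priority": 0, "created_at": 0, "size_bytes": 100},
]
print([o["priority"] for o in next_batch(objs)])  # the high-priority object alone
```

    In a stored-procedure-based design the same selection would live in the SP (ORDER BY priority, creation date, with a running size limit), which is exactly why the question's author wanted to call an SP from the sender side.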

  • Is there a Translate AppID with high volume

    I am using the Translate APIs and I encounter some exceptions like:
    TranslateApiException: AppId is over the quota : ID=1035.V2_Soap.Detect.30AB79E9
    or
    TranslateApiException: IP is over the quota
    It seems I am calling the service too fast. So I am wondering if I can get an AppID with a high-volume quota. It is fine if I need to pay for it. Thanks.

    Thank you for your question
    There are service limits in place to allow for fairness among all our users:
    You are currently able to translate a maximum of 10000 characters per request, and we recommend keeping each request between 2000 and 5000 characters to optimize response times.  
    The hourly limit is 20 million characters, the daily limit is 480 million characters.
    There is no limit to the number of requests per minute. 
    The Translator API is available through Windows Azure Marketplace (www.aka.ms/TranslatorADM) as a monthly subscription model. For all paid tiers, you can choose to enable the Auto-refill feature, which allows Marketplace to automatically re-subscribe you to the same monthly tier if you prematurely exhaust your monthly volume limit.
    Thanks,
    Tanvi Surti
    Program Manager, Microsoft Translator
    Microsoft Translator team - www.microsoft.com/Translator
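    Staying inside the limits quoted in the reply (10,000-character hard cap per request, 2,000-5,000 recommended) mostly means splitting long text before sending it. A minimal sketch of such a splitter, assuming only that requests are plain strings; it is not part of the Translator API itself:

```python
def chunk_text(text, max_chars=5000):
    """Split text into pieces no longer than max_chars, preferring to cut
    at a sentence boundary (". ") so translations stay coherent; falls back
    to a hard split when no boundary is found in range."""
    chunks = []
    while len(text) > max_chars:
        cut = text.rfind(". ", 0, max_chars)
        if cut == -1:
            cut = max_chars          # no sentence boundary: hard split
        else:
            cut += 1                 # keep the period with the left chunk
        chunks.append(text[:cut].strip())
        text = text[cut:]
    if text.strip():
        chunks.append(text.strip())
    return chunks

print(chunk_text("One. Two. Three.", 10))  # ['One. Two.', 'Three.']
```

    Each returned piece would then be one Translate call; pacing the calls (or buying a paid tier) addresses the hourly and daily character quotas.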

  • Use of logbook or PM in high volume context

    Hello,
    Is the logbook suitable for managing large volumes of data (i.e. 3 to 10 million events, such as measurements, per day)?
    An idea could be to create a log entry per event of this type, and only log measurement(s) when relevant and necessary.
    Any insight on the use of the classical "measurement document" in PM (without the logbook) is also welcome.
    Kind Regards
    Eric

    Hello Narasimhan,
    Thanks for your feedback
    We need to collect events related to fleet (truck) activity: kilometers, taxes to be collected, location, etc.
    The information needs to be checked (truck no.) and its status managed.
    Notifications have to be sent according to rules.
    Then a weekly/monthly sum-up is done to be passed to several CRM systems (non-SAP).
    I see 3 possibilities :
    . classical use of PM measurement documents (to be extended with additional info, in addition to Kilometer)
    . use of log book
    . use of IS-U counters
    Context is high volume: 3 to 10 million events per day.
    This information needs to be archived and retrieved when necessary
    This information needs to be on-line for around 3 months
    Any suggestion will be welcome
    Kind Regards
    Eric

  • Secondary Index with or without MANDT field

    Hi ABAP gurus,
    Is it necessary to add the field MANDT when creating a secondary index in the database?
    My superiors challenged me that without MANDT our secondary index won't work,
    but I tested a few scenarios and didn't see any difference.
    Please advise in exactly which scenarios it is mandatory.
    Below are the runtimes from a test program I created, measured without a secondary index, with a secondary index without MANDT, and with a secondary index including MANDT:
    Without secondary index:
    1st run: 57,103,681
    2nd run: 55,388,294
    With secondary index, without MANDT:
    1st run: 324,119
    2nd run: 391,134
    3rd run: 327,046
    4th run: 336,774
    5th run: 359,100
    6th run: 328,027
    With secondary index, with MANDT:
    1st run: 367,623
    2nd run: 365,139
    3rd run: 352,328
    4th run: 369,122
    5th run: 352,236
    6th run: 380,590
    7th run: 466,810
    Thanks In Advance,
    Kandula.
    Edited by: Thomas Zloch on Nov 18, 2011 1:08 PM

    Vishnu Tallapragada wrote:
    So if you are maintaining multiple client data on the same database, then not adding MANDT to index will have undesirable effects as any select based on secondary index may return records that are not belonging to this client and deletes and additions on the index from multiple clients will lead to data integrity issues.
    Wrong!
    The WHERE clause decides about the data being selected, deleted or whatever.
    The index decides only HOW data is accessed (if it is used at all), not WHAT data is accessed.
    If your database returns a different result depending on the index definition, you should log a call with your DB vendor immediately, because this is a bug.
    In general, as the client field usually has only a small number of distinct values, it is not a good field to convince the database that this index is a good idea. But on high volume tables it can be very selective as far as the number of result records is concerned (it might cut the result down 50% with 2 clients!).
    In addition, it is a very short field, so it should not cost much storage (especially when compressed).
    Scenario:
    An index on MANDT + IDX field with two clients, and let's say 5,000 records per client (so that index access will be interesting), assuming a given IDX value returns 50 records (25 in each client).
    So the select will be
    ... WHERE MANDT = sy-mandt AND IDX = value
    Accessing the index with only IDX means stepping down the index tree (say 3 blocks) and then reading leaf blocks to get the 50 hits for the IDX value (assuming 30 records per leaf block, 2 leaf blocks are required to get the 50 records).
    Right now you have accessed 5 blocks to get the addresses of 50 records that still need to be checked against MANDT. So you need to get 50 blocks (maybe fewer, depending on clustering) to filter on MANDT and get the final 25 records for the result.
    If you put the MANDT field into the index, it might require more space, so let's assume 20 records per leaf block now. But since you can now filter on MANDT already in the index blocks, you will again only need to get 5 blocks and have the addresses of the required 25 target records.
    So getting the result takes 55 block inspections without MANDT in the index and 30 block inspections with MANDT in the index in this case.
    Now you can start pushing around values and statistics and calculate at what amount of data and average size of result sets it becomes right or wrong to include MANDT. It may turn out both ways, although I think, with MANDT being small, it is usually a loss of brain cycles to calculate around this.
    If you simply include it, it will cost only a little space and it will never be wrong.
    If you leave it out, you will gain a little space, but might end up with a performance loss.
    If you have only one client in the system, you can safely go with the space-saving strategy, as long as you do not need a UNIQUE secondary index.
    Volker
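    Volker's block arithmetic can be checked with a quick toy model. The tree depth and records-per-leaf figures below are the assumed numbers from his post, not database measurements:

```python
import math

def blocks_inspected(candidate_rows, recs_per_leaf, tree_depth=3):
    """Blocks touched for an index range scan plus table row fetches:
    tree descent + leaf blocks holding the candidates + (worst case)
    one table block per candidate row."""
    index_blocks = tree_depth + math.ceil(candidate_rows / recs_per_leaf)
    return index_blocks + candidate_rows

# Without MANDT in the index: all 50 rows (both clients) are candidates,
# 30 index entries per leaf block.
print(blocks_inspected(50, 30))  # 55, as in the post
# With MANDT in the index: only this client's 25 rows are candidates;
# the wider key fits only 20 entries per leaf block.
print(blocks_inspected(25, 20))  # 30
```

    The model reproduces the 55-versus-30 comparison and makes it easy to vary the result set size or leaf density to see where including MANDT stops paying off.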

  • T7900 Speakers cutting out at high volume

    Help! I'm having a party tomorrow, and every time I turn my speakers higher than about 50% they cut out, so that the volume drops until the next beat hits, then it cuts out again. I am using the original power supply, and I am sure that the connections are OK because the problem only started recently.
    I did have a problem with the pendant that has the volume and bass controls on it, in that there was a bad connection when you turned the volume dial, so that it crackled and the green light dimmed, but that cleared up after a little use. The green light stays on with the current fault.
    I am using onboard sound (AC97) on my MSI 875P Neo motherboard, and I haven't altered the drivers since before it worked.
    I'm running out of time before we kick off tomorrow, so if there's any help you can offer, please post.
    Otherwise it's going to be a very quiet party.
    Also, I don't know why there are no paragraphs when I preview this post. Go figure.

    Hello guys, I faced a similar problem with the T7900.
    When I turn up the volume from my PC, without touching anything else, the speakers begin to power off and then back on. One time I found them powering off and on again without playing any music, which terrified me to death, as I was sleeping. At last I began to check the wires, and just by moving them I found the same thing happening. The fuse located at the end of the adapter cable (the thick part before the jack that connects to the subwoofer) was not properly connected to the wire, which prevented the high power needed for high volumes from passing through, so the speakers began to power off. I fixed it and the problem never happened again. Hoping this helps you.

  • Volume Issues with Full Orchestra

    This is my first full orchestra piece done digitally. I've been able to produce quite a few smaller chamber works no problem, but I can't get the volume up on this one without blowing into the red zone on Output 1-2. In order to keep the peaks under it, the whole thing has to be bounced at too low a volume. All my chamber pieces are higher volume on my output equipment than this thing.
    Help?

    I know a lot of movie scores are compressed and limited, but that's because they are composed to support the movie and not "get in the way", but I really don't want to go that way. I'm writing orchestral music and want the full dynamic range.
    I've written a lot for orchestra (but not digitally) and am confident I'm not writing things that are compositionally thick/heavy and over-scored. So I guess that leaves "MIDI-rendering", and I'm baffled as to how this one element could keep me from red-zoning. I've tried dropping the velocity and raising the bus and output volumes, raising the velocity and dropping the volumes, pulling back on sends (while pumping up the reverb), pulling back on reverb, using a smaller "room", etc. Nothing gets me the volume needed.
    FYI - I'm using VSL almost exclusively as my samples, with Platinum as my reverb.
    We live in a world of abundance. Thanks in advance for your help.

  • High volume Printing for GLM?

    I understand from various sources that GLM can support high volume label output, provided the correct enhancements and support packs are installed and configured. This may include updates to WWI. We are currently running ECC6 EHP5, SAP_BASIS 10, EHSM 3, WWI 2.7.
    I would like to get this community's input. Our requirement is to generate >2000 identical labels via GLM. WWI processing today is long (5+ min) and the output is large. Ideally the system would generate 1 label with an indication to print it 2000 times. Our labels include barcodes and symbols and can be complex.

    Dear Richard,
    The functions of GLM are well explained, as Christopher said; using GLM you can print more than 10,000 labels with a sequential data output. To support printing such a large quantity of data on labels, you may need special printers like the Zebra high volume printer. The WWI server is equipped to print large volumes of labels using specific printer plug-ins.
    High volume printers (HVP) can be used for print requests with print files that are too large or that contain more than 32,768 labels to be printed. The HVP is designed as a printer driver for Microsoft Windows and is connected to the label printer via a plug-in.
    With the HVP, only one page with all static data is sent to the printer. The HVP then receives all of the sequential data via an interface and automatically supplements the sequential data in the printout. The HVP also integrates changing barcodes or texts in the printout.
    The Zebra 170Xi4 is one of the most popular industrial-strength printers on the market. This rugged metal unit prints 1-color labels up to 6.6" wide, with 300 dpi print resolution in thermal or thermal transfer mode. Its extra processing power equates to speeds up to 12 inches per second. It is perfect for tough applications including compliance labels, product labels, and shipping labels.
    For further information, please check the links below
    EHS - Continuous Improvement for Global Label Management - Logistics - SAP Library
    OSS notes in Global Label management
    New changes for GLM in EHP7.0 and ERP 6.0
    Dhinesh

  • Volume issues with GE60 Apache Pro-003

    I recently picked up the new GE60 Apache Pro notebook, and I've been running it for about a week now. I should of course mention that I also did a clean Windows install after adding an SSD to the system (and now can't figure out how to add back that cool MSI panel that was there when I first got the system).
    Anyway, I've noticed that there seem to be volume issues with the system no matter what settings I tweak or drivers I try to use. With all sliders at maximum, the volume levels still never seem to surpass around 50% (from observing the volume gauges in the Windows volume mixer panel). My previous notebook's speakers put out significantly higher volume, and it was also able to put out far more power to my headphones (Audio Technica ATH-M50).
    Any possible ideas for the cause and solution to this problem? Thanks in advance, and let me know if there is anything else I need to mention.

    Quote from: darkhawk on 02-April-14, 20:44:14
    What are you using to gauge the '50%'? MP3's? Windows sounds? or what?
    Pretty much everything. Music, games, online videos... According to the volume mixer, the audio levels never seem to pass 50%. The grey bar bounces to the peak, but the green bar never seems to pass the 50% level. It's as if there's some sort of limiter in place I'm not aware of. And since I reformatted the notebook as soon as I got it, I was never able to see if the volume control worked correctly before the reformat.
