Campaign determination log was switched on

Dear Gurus,
In the order screen we have campaign management, which I have turned on.
Can you explain what campaign management is and what it is used for?
Regards,
Amlan Sarkar

Hi Amlan,
(Please reward points so that SDNers are encouraged to help each other with more enthusiasm.)
Campaign Management:
This Business Scenario Map shows how three parties -- a data provider, a vendor, and a customer -- can work together to plan and execute marketing activities. The map illustrates the benefits of collaboration. Results include lower costs, faster turnaround times, and more effective campaigns.
This scenario addresses the following business challenges:
    1. Business users are reliant on IT for customer segmentation, creating a bottleneck that increases cycle time and makes execution inefficient.
    2. Inability to effectively coordinate and plan campaign activities, which causes waste and redundancy.
    3. The campaign cycle takes too long to complete.
    4. Communications are uncoordinated, single-channel ‘blasts’; organizations provide inconsistent, irrelevant messaging to customers through ineffective channels.
    5. Inability to determine the best offer/channel combination.
    6. Organizations are forced to treat all customers the same way, sending the same message to everyone, with no ability to personalize.
    7. Product availability and pricing are not taken into account in the campaign plan, due to uncoordinated organizational efforts.
    8. Organizations are unable to understand the full value of the customer.
    9. Business users are unable to create and generate call lists and emails; there is often a bottleneck in IT, or with a third party, in developing personalized call lists and emails.
    10. Organizations have little or no ability to monitor campaign performance during execution, and cannot make adjustments ‘on the fly’ for maximum effectiveness.
    11. Organizations have no ability to sort through millions of customers and massive amounts of data to understand areas of opportunity, effectiveness, and ROI; tools on the market are not integrated into the campaign efforts.
    12. Uncoordinated campaign task management, with constant duplication of effort.
Regards
AK

Similar Messages

  • WARNING - Last redo log file switch interval was 6 minutes. Next interval p

    Hello!
    I just started monitoring my Oracle via ConSol Nagios script.
    Now it shows a Warning:
    WARNING - Last redo log file switch interval was 6 minutes. Next interval presumably >2 minutes. Second incident in a row.
    As I do not know much about Oracle, I do not understand that message.
    a) What does it mean?
    b) Is it really critical?
    Thanks!

    First of all: this is not an Oracle error message. Instead, you have a somewhat arbitrary threshold set in your non-Oracle monitoring tool. If this happens only once in a while, ignore it or modify the threshold.
    Generally, frequent fast log switches can be a cause of performance problems.
    If you have Oracle Enterprise Manager in place and the Diagnostic Pack licensed, it would give you a much more qualified statement about your database performance, including a recommendation to increase the logfile size if appropriate.
    If you are looking for a procedure to increase the size of your logfiles:
    http://uhesse.wordpress.com/2010/01/20/how-to-change-the-size-of-online-redologs/
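    In short, the steps look roughly like this (only a sketch; the group numbers, sizes and file paths are examples, see the post above for the full procedure):
    -- add new, larger online redo log groups (paths and sizes are examples)
    ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 512M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 512M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/orcl/redo06.log') SIZE 512M;
    -- switch and checkpoint until none of the old groups is CURRENT or ACTIVE
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;
    -- drop the old, smaller groups once they show INACTIVE (remove their files at OS level afterwards)
    ALTER DATABASE DROP LOGFILE GROUP 1;
    ALTER DATABASE DROP LOGFILE GROUP 2;
    ALTER DATABASE DROP LOGFILE GROUP 3;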
    Kind regards
    Uwe Hesse
    http://uhesse.wordpress.com

  • Macbook Pro retina display 13 inch stopped charging. Charger LED also not on while plugged in. The battery fully drained off but the system can detect the charger when it was switched on.

    MacBook Pro Retina 13-inch (2014 model) stopped charging. The charger LED also does not light while plugged in. The battery has fully drained, so the system cannot be started, but the system did detect the charger earlier, when it could still be switched on. Charger type: MagSafe 60W.

    Once you have examined the power inlet on the computer for dirt, and made certain that the pins on the end of the MagSafe cord are free to move in and out to make contact, there is very little you can do yourself.
    You will need to visit your Apple Store (FREE with appointment) or your Apple-Authorized Service Provider (policy varies). They work on these all the time and have good expertise and lots of spare parts to try to determine the problem.

  • Log file switch (checkpoint incomplete)

    hello,
    Lately I have had a lot of update/delete work on my OLTP production database. Generally this database is heavily loaded with many inserts, especially during the daytime, so I do my work at night :) However, this work causes some contention. I ran an AWR report and I see "log file switch (checkpoint incomplete)" in second place among the wait events. I have six redo log groups, each file 400MB, a 12MB redo log buffer, mttr 0, checkpoint interval 0 and checkpoint timeout 1800, so checkpoints are usually "made by" log switches. Normally I have 2-5 switches per hour, but when I am doing my work it's about 15-20 switches/hour. Can I speed it up by resizing the redo files or by adding more groups? I don't have a test environment to test it, so I am wondering if someone has experience with that.
    thanks
    10gr2, linux

    Hi Helter,
    I have experienced the same problem, only with smaller log files and 3 groups. The solution you suggested helped me solve the issue; ever since, I don't have any "checkpoint incomplete" error messages in the alert log file.
    I enlarged each group to 100 MB (initially it was 50 MB) and added 2 more groups.
    Hope this solution will help you too.
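    If you want to verify the effect, a quick way to see the switch rate per hour is something like this (just a sketch against v$log_history; adjust the date format as you like):
    select to_char(first_time, 'YYYY-MM-DD HH24') hour_of_day,
           count(*) log_switches
    from   v$log_history
    group  by to_char(first_time, 'YYYY-MM-DD HH24')
    order  by 1;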
    dBarak

  • RAC 10.2.0.4, event gc cr block busy & log file switch

    Hello everybody,
    I would like to know if there is any dependency between gc cr block busy and a log switch on one node of the RAC cluster.
    I had a select whose completion time was 12 seconds instead of 1, and the start time of the select is the start time of the log switch on that node.
    But when I looked into the active session history, the session that was running that select had been waiting on gc cr block busy instead of log file switch completion.
    While looking through Google resources I noticed that "The gc current block busy and gc cr block busy wait events indicate that the remote instance received the block after a remote instance processing delay. In most cases, this is due to a log flush".
    I would be really grateful if anybody could locate the dependency I mentioned and explain the cause of the issue, as I cannot quite understand why the select took so long.
    Thank you in advance!

    Did you mean "log file switch"?
    Do you mean log file switch (checkpoint incomplete), log file switch (archiving needed), log file switch/archive, log file switch (clearing log file), log file switch completion, or log switch/archive?
    In any case, an instance can wait on these events; if you find high wait values, you may need to tune your database.
    Please show us:
    - Top 5 Wait Events
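    If you do not have an AWR report handy, something like this gives a rough top 5 (just a sketch against v$system_event, excluding idle waits):
    select *
    from  ( select event, total_waits, time_waited
            from   v$system_event
            where  wait_class <> 'Idle'
            order  by time_waited desc )
    where rownum <= 5;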
    SQL> alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';
    SQL> select name, completion_time from V$ARCHIVED_LOG order by completion_time ;
    Check how often you switch logfiles to archived logs. At every log file switch you may find "log file switch" waits.
    I see... you have no high DML activity.
    But please check the top segments, objects, and queries in the AWR report (for example, Segments by Physical Writes).
    just investigate
    Good Luck

  • Log file switch (archiving needed)

    Hi,
    My database is on Windows 2003, 10.2.0.4. Recently I have been getting the following wait event on a regular basis: LOG FILE SWITCH (ARCHIVING NEEDED). My redo files are 50MB with 5 groups. I have changed log_archive_max_processes from 2 to 3, but the problem still persists. Can anyone help me with this? What other changes can be made? This happens only at the time of BOD and EOD.
    Thanks,
    AJ

    Hi Jonathan,
    Thanks for the reply. I am not worried about the archiving for the time being, as my database was working absolutely fine before the new query was added to the Begin of Day process. So I am trying to tweak the query.
    Following is the procedure which gets called during the Begin of Day:
    CREATE OR REPLACE procedure Lms_Pr_Bod_Update_Tmp (p_import_type in number)
    as
      v_update_script   varchar2(2000);
      v_upper_start_tag varchar2(10); -- san_22-apr-2009
      v_upper_end_tag   varchar2(10); -- san_22-apr-2009
    begin
      declare
        cursor cur_update_tmp is
          select -- this query is for all of lov type of data
                 a.destination_column_name, a.column_to_update, b.mapped_lov_syscode lov_syscode,
                 c.destination_table_name, 'LOV_DATA_SYSCODE' select_attribute, 'LMS_LOV_DATA_MAP' select_table,
                 'SOURCE_UNIQUE_ID' where_condition, 'LOV_SYSCODE' lov_condition, 'LOV' att_type
            from lms_import_column_info a
            inner join lms_attribute_master b
              on a.attribute_syscode = b.attribute_syscode
             and (a.column_to_update is not null or trim(a.column_to_update) = '')
            inner join lms_import_type_master c
              on a.import_type_syscode = c.import_type_syscode
           where a.import_type_syscode = p_import_type
             and b.mapped_lov_syscode is not null
          union all
          select -- this query is for all of dummy attribute
                 a.destination_column_name, a.column_to_update, 1 lov_syscode,
                 d.destination_table_name, c.attribute_name_internal select_attribute,
                 case
                   when b.applicable_for = 'INS' then 'LMS_ENTITY_INSTRUMENT'
                   when b.applicable_for = 'ACC' then 'LMS_ENTITY_ACCOUNT'
                 end case,
                 b.attribute_name_internal where_condition, '1' lov_condition, 'DUMMY' att_type
            from lms_import_column_info a
            inner join lms_attribute_master b
              on a.attribute_syscode = b.attribute_syscode
             and (a.column_to_update is not null and trim(a.column_to_update) <> ' ')
             and b.attribute_type = 'DUM' and dummy_column_type = 'FILT'
            inner join lms_attribute_master c
              on b.dummy_mapped_attribute_pk = c.attribute_syscode
            inner join lms_import_type_master d
              on a.import_type_syscode = d.import_type_syscode
           where a.import_type_syscode = p_import_type;
      begin
        v_upper_start_tag := 'UPPER('; -- san_22-apr-2009
        v_upper_end_tag   := ')';      -- san_22-apr-2009
        for cr_update_tmp in cur_update_tmp loop
          if p_import_type = 4 and cr_update_tmp.column_to_update in ('INSTRUMENT_SYSCODE','ACCOUNT_SYSCODE')
             and upper(cr_update_tmp.destination_table_name) = 'LMS_ENTITY_TRANSACTION_TMP' then
            v_upper_start_tag := ''; -- san_22-apr-2009
            v_upper_end_tag   := ''; -- san_22-apr-2009
          end if;
          v_update_script := 'UPDATE ' || cr_update_tmp.destination_table_name || ' A SET ' || ' A.' || cr_update_tmp.column_to_update || ' =
            (select ' || cr_update_tmp.select_attribute || ' from ' || cr_update_tmp.select_table || '
             where ' || v_upper_start_tag || cr_update_tmp.where_condition || v_upper_end_tag || '=' || v_upper_start_tag || cr_update_tmp.destination_column_name || v_upper_end_tag ||
            ' AND ' || cr_update_tmp.lov_condition || ' = ' || cr_update_tmp.lov_syscode || ')';
          v_upper_start_tag := 'UPPER(';
          v_upper_end_tag   := ')';
          execute immediate (v_update_script);
        end loop;
      end;
    end;
    Following is the flow of the query:
    1. A temporary table is created wherein the updates can be made.
    2. Rows are inserted into this table from the source table.
    3. Updates are performed on this table.
    4. Updates are then copied to the source table.
    5. This procedure is called twice, so before it is called for the second time, the table is truncated.
    Thanks,
    AJ

  • Checkpoint vs. log file switch

    Hi..
    Does Oracle flush all dirty buffers at a log file switch, or just a part of the dirty buffers?
    The documentation says a full checkpoint occurs only on command (shutdown or a checkpoint command). I guess this means that a log file switch doesn't flush all dirty buffers. I want to get some more information on this.
    Thanks

    Hi Jonathan,
    Thank you for participating in the OTN topics.
    >
    At any full checkpoint (log switch or command) DBWR
    is triggered to walk the checkpoint queue and write
    all the blocks that are currently on it - i.e. all
    the currently dirty blocks. Obviously this can take
    some time, but the task has a known completion point
    because of the queue structure.
    >
    I once read an interesting article published in Oracle Magazine RE two years ago.
    The author states (and there is no reason to distrust him) that after a log switch a so-called low-priority normal checkpoint takes place, in contrast to an incremental checkpoint, where DBWR performs the actual writes to disk. It is not necessary for DBWR to start the checkpoint writes immediately at the log switch.
    I played around a bit, experimenting with log switches, global checkpoints, etc., running a lot of DML at the same time to supply DBWR with dirty blocks, but the algorithm used by Oracle is not clear at all.
    I observed that ckpt_block_writes in the v$instance_recovery view does not increase right after a log switch; instead it increments occasionally (when an incremental checkpoint starts, I suppose).
    Can you please comment on this?
    Your opinion is really valuable.
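    (For reference, I was watching it with something like this; just a sketch, the exact column list may differ between versions:)
    select ckpt_block_writes, target_mttr, estimated_mttr
    from   v$instance_recovery;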
    Best Regards,
    Alex

  • How to tell if/when AFP logging was disabled?

    Is there any record kept anywhere in the logs as to whether AFP logging was disabled or enabled? We're missing some AFP logs, so we're trying to track down the reason.

    There is a script that can determine that (for files that have been saved from Photoshop, at least).
    Unfortunately the copy I have carries no credits and I can't locate it on the web either, so I don't know whom to credit for it …
    That being said, I could send it to you if you want to give it a try.

  • Revelation: Why my diagnostic logging was not working using log4net TraceAppender

    I thought I had better share some information I found which has shed some light on why I've had such a hard time getting native Azure diagnostic logging working. Hopefully, if I hit enough keywords here, someone will find this discussion in the future and it may save them some time and heartache.
    Our application is a legacy ASP.NET application which I am porting to Azure. The application makes extensive use of log4net, and I decided that I would use the log4net TraceAppender to forward the logs to the Azure diagnostic listener. Within my worker role, I was configuring my Azure logs like so:
    private static void ConfigureLogging() {
        CrashDumps.EnableCollection(true);
        var diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
        var directories = diagConfig.Directories;
        var infrastructureDiagnostics = diagConfig.DiagnosticInfrastructureLogs;
        var applicationLogs = diagConfig.Logs;
        var eventLogs = diagConfig.WindowsEventLog;
        SetTransferPeriod(applicationLogs, 1);
        SetTransferPeriod(directories, 1);
        SetTransferPeriod(infrastructureDiagnostics, 1);
        SetTransferPeriod(eventLogs, 1);
        SetFilterLevel(applicationLogs, LogLevel.Information);
        SetFilterLevel(eventLogs, LogLevel.Information);
        SetFilterLevel(infrastructureDiagnostics, LogLevel.Warning);
        DiagnosticMonitor.Start("DiagnosticsConnectionString", diagConfig);
        Log4NetHelper.ConfigureLog4Net();
    }
    private static void SetFilterLevel(WindowsEventLogsBufferConfiguration eventLogs, LogLevel logLevel) {
        eventLogs.DataSources.Add("Application!*");
        eventLogs.DataSources.Add("System!*");
        eventLogs.ScheduledTransferLogLevelFilter = logLevel;
    }
    private static void SetFilterLevel(BasicLogsBufferConfiguration infrastructureDiagnostics, LogLevel logLevel) {
        infrastructureDiagnostics.ScheduledTransferLogLevelFilter = logLevel;
    }
    private static void SetTransferPeriod(DiagnosticDataBufferConfiguration directories, int minutes) {
        var period = TimeSpan.FromMinutes(minutes);
        directories.ScheduledTransferPeriod = period;
    }
    Log4NetHelper.Configure uses the log4net programmatic API to set up and configure a TraceAppender that captures all LogLevels (DEBUG or higher). I also made sure that the following appeared in my Web.Config:
    <system.diagnostics>
    <trace autoflush="false" indentsize="4">
    <listeners>
    <clear />
    <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
    <filter type="" />
    </add>
    </listeners>
    </trace>
    </system.diagnostics>
    However, this never worked.  I saw a number of forum posts that suggested that the Azure filter level should be "Undefined" - namely that I should replace
    SetFilterLevel(applicationLogs, LogLevel.Information);
    with
    SetFilterLevel(applicationLogs, LogLevel.Undefined);
    I had set my filter level to Information because our app does so much Debug-level logging that I thought I'd save money on storage by only capturing the INFO or higher messages. But in the interest of getting the blasted thing to work, I set it to Undefined, and sure enough all the log messages from all levels came through. Setting it back to Information, NO messages come through.
    I then noticed that when my logging was working (filter = Undefined), ALL my log messages were in fact being logged at the Verbose level - even log4net calls to log.Error/log.Warn (exception messages being logged at Verbose is not a good sign!).
    You have to realize that my thought process was not structured at this point, as I had no idea what was going on, until I found this article:
    http://www.dotnetsolutions.co.uk/blog/archive/2010/02/22/windows-azure-diagnostics-%E2%80%93-why-the-trace-writeline-method-only-sends-verbose-messages/
    While not related to log4net, it was an eye-opener. I used Reflector to look at the TraceAppender, and I saw that it uses Trace.Write to write messages.
    In my mind, this explains the behaviour I was seeing: the AzureDiagnostics listener converts all Trace.Write messages to Verbose (details in the above article), and the log4net TraceAppender converts all log messages to Trace.Write messages. Setting the filter to Information was thus stripping all the calls.
    My solution is to inherit from TraceAppender and change its implementation of Append to switch on the LogLevel:
    using System.Diagnostics;
    using log4net.Appender;
    using log4net.Core;
    namespace XXX.Azure
    {
        public class AzureTraceAppender : TraceAppender
        {
            protected override void Append(LoggingEvent loggingEvent)
            {
                var level = loggingEvent.Level;
                var message = RenderLoggingEvent(loggingEvent);
                if (level >= Level.Error)
                    Trace.TraceError(message);
                else if (level >= Level.Warn)
                    Trace.TraceWarning(message);
                else if (level >= Level.Info)
                    Trace.TraceInformation(message);
                else
                    Trace.Write(message);
                if (ImmediateFlush)
                    Trace.Flush();
            }
        }
    }
    With log4net configured to use this appender, the log messages appearing in the WADLogsTable all appear at the correct (or nearest appropriate) level.
    YMMV
    Pete

    Thanks, that's extremely useful!
    Using your appender, Verbose-level messages were still not shown, though. What did the trick was changing
    Trace.Write(message);
    to
    Trace.WriteLine(message);

  • Condition type for Campaign determination

    Hi gurus
    I am customizing Campaign Determination. When I try to create a new condition type for the usage CD (Campaign Deter.), the system does not let me select that option; it always shows the PR usage as the default. Could someone tell me what is wrong?
    Thanks a lot

    Hi
    Prakash, your link is very helpful. Now let me explain my request: when a campaign/plan is created, I need a check (error or warning) on whether a product is already being used in another campaign/plan.
    Someone told me I can use campaign determination for it under the path below:
    In the CRM IMG follow the path: CRM -> Marketing -> Marketing Planning and Campaign Management -> Condition Maintenance -> Define Condition Generation, using 02 as the selection in "Conflict resolution value".
    But my problem is that I cannot create a CD condition type in CRM.
    Please tell me if there is another solution.

  • When I was talking on phone, suddenly the phone was switched off. i tried to switch it on but it gave the message....connect to itunes for set up.  when I connected it to itunes...it gave the message, itunes can not read data from this iphone, restore it

    When I was talking on the phone, it suddenly switched off.
    I tried to switch it on, but it gave the message: connect to iTunes for setup.
    When I connected it to iTunes, it gave the message that iTunes cannot read data from this iPhone, restore it to factory settings. It also said that while restoring you will lose all media data, but you can restore the contacts.
    I restored the factory settings. The phone was in recovery mode, it was verified by iTunes and all that, but in the end it again said that the iPhone has some problem and cannot function right now.
    After that, whenever I connect it to iTunes, it gives the message that it cannot activate the iPhone, try again later or contact customer service.
    What should I do now? Customer service people say it is a hardware problem.

    If it's a hardware problem, then the phone will need to be replaced.
    There is no magic that can fix a hardware problem.

  • Campaign determination in R/3 based on CRM campaigns

    Hi,
    We have CRM 4.0.
    Can you confirm whether, if I create a campaign in CRM with planned dates and without condition records, that campaign is determined automatically when I create a sales order in R/3? Or does R/3 only work with pricing/condition campaigns?
    Thanks and Regards,
    SF

    Dear Diogo (myself!) and Sara,
    After some hard work, the answer is finally clear. It is possible to have campaign determination in an R/3 sales order for a marketing trade promotion (without customer or pricing conditions).
    As an example, if there is a trade promotion for indoor publicity and no pricing discount or free goods, you can have this campaign determined in the corresponding sales orders.
    It is indeed necessary to use the condition technique, but not a pricing condition, in the CRM trade spends tab.
    Please reward points if it helped.
    Best Regards,
    Diogo

  • " Campaign determination " error in Trade Promotions

    I am working on the Trade Promotion functionality for a Target Group. The conditions are getting generated and appear in the Discounts tab. The target group is also picking up only the relevant BPs and showing them in the Volume/Trade Spend tab, but while releasing the Trade Promotion I receive an error: "Campaign determination records can't be generated" and "Enter a hierarchy business partner". Since Trade Promotions on the BP hierarchy are released without errors, I am wondering whether there is any particular configuration that needs to be done specifically for the Target Group as customer. Could anybody please help me in this regard?

    Look into this and see if it helps: http://help.sap.com/saphelp_crm50/helpdata/en/c1/e60741375cf16fe10000000a1550b0/frameset.htm

  • Capture process often abends when the online log is switching

    Environment:
    Red Hat Enterprise Linux Server release 5.8 (X64)
    Oracle Database 11.2.0.3.0
    Oracle Golden Gate 11.1.1.1.2
    Problem:
    The capture process often abends when the online log is switching (not every time).
    I found these errors in ggserr.log:
    2013-05-01 22:10:53 ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, cap.prm: error reading redo log file,
    '+DATA/orcl/onlinelog/group_2.265.803917775' for sequence 1021: Reading ASM file +DATA/orcl/onlinelog/group_2.265.803917775
    in DBLOGREADER mode: (333) ORA-00333: redo log read error block 42395 count 4081.
    2013-05-01 22:10:53 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, cap.prm: PROCESS ABENDING.
    But I can't find any error in the database's log.xml other than 'Incomplete read from log member
    '+DATA/orcl/onlinelog/group_2.265.803917775'. Trying next member.'
    I find that when this error happens, the database is switching its online log. It does not happen on one specific online
    log, and when the database switches back to the 'error' online log, the capture process works correctly.
    Here is my configuration:
    GGSCI (TIKUDB) 1> view params cap
    EXTRACT cap
    SETENV (ORACLE_SID = orcl)
    USERID ogg, PASSWORD ogg
    TRANLOGOPTIONS DBLOGREADER
    TRANLOGOPTIONS LOGRETENTION ENABLED
    EXTTRAIL dirdat/lt
    TABLE TQMS.TEST;
    Please help me!

    xyz_hh wrote:
    Environment:
    Red Hat Enterprise Linux Server release 5.8 (X64)
    Oracle Database 11.2.0.3.0
    Oracle Golden Gate 11.1.1.1.2
    Would it be possible to try upgrading to OGG 11.2.1, and see if the problem persists?

  • Campaign Determination - Sales Order

    Hi Experts,
    Can we do campaign determination at the sales order level? If yes, can anyone please let me know how to do the settings in SAP CRM 2007.
    Thanks in advance.
    Regards
    Vinod

    Does anyone have any suggestions or ideas for this?
    Thanks
    Alok
