Entire GL A/c process

Hi Experts,
I am a fresher in the SAP FICO field. I need to test a dummy G/L account named WIP. Can anybody walk me through the entire G/L account process, from invoice posting through payment? I need to test in FBL3N, where I can view open items, cleared items, and all items.
Looking forward to your reply.
Thanks in advance

Thanks Aditya,
In the standard report FBL3N (G/L account WIP), executing with the Cleared Items checkbox produces no output, but with Open Items we get both open and cleared items.
When I go to transaction FBL3N, enter G/L account XXXXXX, and select Open Items, it picks up all the cleared and open items, whereas the business requires only open items.
Later I learned that in FS00 both Open Item Management and Line Item Display should have been selected, but only Line Item Display was; that may be why FBL3N does not let us pull the reports correctly.
So now I am planning to create a dummy G/L account in the development server for testing. For that I need to set up the entire process in such a way that I get open items, cleared items, and all items.
Please suggest a solution for the same.
Regards,
Vinayaka
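The open/cleared/all-items distinction that FBL3N reports can be modeled very simply: a line item stays open until a clearing document number (the AUGBL field, in SAP terms) is posted against it. A minimal Python sketch of that semantics, with invented sample postings (this is only an illustration of the reporting logic, not SAP code):

```python
# Each line item carries a clearing-document field (AUGBL in SAP terms);
# it is empty while the item is open and filled once the item is cleared.
items = [
    {"doc": "1900000001", "amount": 1000.0, "augbl": ""},            # open invoice
    {"doc": "1900000002", "amount": 500.0,  "augbl": "2000000007"},  # invoice cleared by a payment
    {"doc": "2000000007", "amount": -500.0, "augbl": "2000000007"},  # the clearing payment itself
]

open_items    = [i for i in items if not i["augbl"]]   # FBL3N "Open items"
cleared_items = [i for i in items if i["augbl"]]       # FBL3N "Cleared items"
all_items     = items                                  # FBL3N "All items" = open + cleared

print(len(open_items), len(cleared_items), len(all_items))  # → 1 2 3
```

Note that the account must be flagged for Open Item Management (and Line Item Display) in FS00 before clearing information is kept at all, which matches the follow-up above.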

Similar Messages

  • I copied all of my CDs onto my iMac's hard drive. Now I'd like to buy a new iMac and am wondering how to transfer the music loaded on my iMac from CDs onto my new computer without going through the process of loading them into my iTunes Library again.

    I have copied all of my CDs, probably over 100, onto my iMac through iTunes, and am now considering buying a new iMac. I would like to be able to transfer what's in my music library without having to recopy all of the CDs. Any thoughts? Can I do it through iCloud, or do I need to buy an external storage device?
    Thanks, Brenda

    When you buy your new iMac, the installation process will be able to transfer all your applications, data (including music, pictures, documents, etc.) and settings to the new computer. In the past that was done via FireWire. I'm not entirely sure of the process with the new iMacs that don't have FireWire; I think it can be done via Ethernet, and probably via the Thunderbolt port using a FireWire adapter, but you'll want to check that.
        With your new computer properly connected to your old computer, the initial boot of the new computer will include the option to move everything over.
        If for some reason you don't want to do that (smart if you have some horrible software issue that is causing problems), you can connect your old computer to the new one and start it in Target Disk Mode, where it will be seen as a hard drive. You can then manually copy over all your music and import it into iTunes (along with any other data you want).
        Somebody here will surely point you to directions for connecting an older iMac with no Thunderbolt to a newer iMac with no FireWire.

  • Batch processing InDesign CS5 files

    When I select either individual InDesign files or the entire folder to batch-process to PDF, I get a message saying that the InDesign files can't be found or opened. It initially wanted to launch ID CS4, so I uninstalled that, thinking that was the problem. Acrobat still will not batch-create PDFs, saying the application cannot be found. I can create a PDF directly from the ID file, but because I have almost 300 ID files to convert, I would really prefer to batch-create these PDFs. I am using Acrobat 9.0. Help!

    To get a PDF from a non-PDF file format via Acrobat, the file format must be supported.
    Acrobat has various ways to take supported file formats and produce a PDF.
    Looking at what is supported by PDFMaker
      (http://helpx.adobe.com/acrobat/kb/compatible-web-browsers-pdfmaker-applications.html)
    and at what is supported for direct file conversion
      (http://helpx.adobe.com/acrobat/kb/supported-file-formats-acrobat-reader.html)
    the InDesign file formats are not listed.
    Based on this I'd say you are going to have to use InDesign's internal process to get PDF from the InD files.
    Be well...

  • New HP HDD load recovery disks fails to boot after install process on pressing continue

    I have a Pavilion d7-4165dx notebook. I live outside the US, though I made the purchase in the US (Best Buy).
    - The HDD failed after I renewed the trial period of the antivirus software.
    - No diagnostics would recognize the HDD.
    - HP support indicated it was still in warranty, but I am out of the country. HP indicated the most likely issue was the HDD cable.
    - Purchased cables and had someone hand-carry them out of the US.
    - Installed them; still the same response.
    - Ordered a new HDD from HP. Backordered. On inquiring, I was finally directed to a US third-party rep. He recommended buying directly from the outlet, but a higher-speed drive. Did that, and then the HP replacement HDD arrived. Hand-carried both out of the US.
    - Installed the faster (slightly smaller) Seagate 500GB 7200rpm. Started recovery with the recovery disks (5 total, received from Best Buy, but not OEM-looking), but it complained that the new HDD is smaller than the original WD 640GB 5400rpm. Put the WD into bay #1 and the Seagate into bay #2.
    - Started the recovery-disk process. Went through the entire recover-from-factory process, and on the last disk it says not to touch anything while the PC boots on and off, etc. Then it says to press Continue to continue.
    - On pressing Continue, it simply goes to a black screen with "No boot device...".
    - Ran the other options; same message.
    - Ran diagnostics. HDD all OK.
    - Called HP support. He indicated I need to buy the HP recovery disks; the ones I have (5) are not the right ones, and there should only be 3! They are as I received them from Best Buy, but they look like they were made there when Best Buy set up the PC.
    Any suggestions, anyone?
    This question was solved.

    Sorry, I don't have an answer, but rather a similar issue. I replaced a 320GB 5400rpm drive that failed with a 750GB 7200rpm drive, and even though I have the factory restore disks, it won't recognize the hard drive and I too get the "No boot device" message. Unfortunately, the BIOS is not the best, and it won't let me see whether it actually recognizes the new HDD, which I suspect it doesn't. Either that or I have a bad motherboard.
    System information - HP G71-340US
    Processor - 2.20 GHz Intel Core 2 Duo T6600
    BIOS - InsydeH2O 3.5
    Recovery Manager *appears* to install W7 just fine, but it's a no-go when it's time to actually boot into it. Is this a motherboard issue, or did I just buy the wrong HDD?

  • Batch Processing of Messages

    How can I set up WL 7.0 so that a certain number of messages are delivered to a consumer? In other words, instead of having one message delivered to the message listener, I want to have x messages delivered so that I can process them all at once. Thanks for any advice.

    Setting a delivery time for a message is a WebLogic-specific feature.
    It is described in the Programmer's Guide:
    http://edocs.bea.com/wls/docs81/jms/implement.html#1235262
    For a list of all WebLogic-specific features start here:
    http://edocs.bea.com/wls/docs81/jms/intro.html#jms_features
    The quoted history of the thread, in chronological order:

    Tom Barnes wrote:
    Hi Jeba,
    [Method 1, truncated in the quoting] ...with acknowledge policy configured to "acknowledge-previous". Then acknowledge every Nth message in the onMessage(). To handle batches smaller than N, some kind of timer needs to be set to "wake up" the client and acknowledge the batch; one way to create a timer is via scheduled messages received from a temporary queue.
    Method 2) Create a synchronous consumer on the same session and call this synchronous consumer's receive() method from within the asynch consumer's onMessage(). Use a short timeout on the receive. When N messages are received, or when the receive() returns null, acknowledge the messages. (Again using client-acknowledge.)
    Method 3) Use a synchronous consumer with a dedicated thread and a short-timeout receive(). Using a synchronous consumer allows for using JTA transactions rather than client-ack. Such transactions can actually improve overall throughput in batching scenarios, as the entire batch can be processed by the involved resources in fewer aggregated disk I/Os (see the bridge performance notes in the white-paper below).
    Some of the above info is included in the JMS Performance Guide white-paper available here:
    http://dev2dev.bea.com/technologies/jms/index.jsp
    Tom

    Jeba Bhaskaran wrote:
    Sorry, the previous reply was sent by accident. Would it be possible for you to provide sample code for #2, just the code snippet for onMessage()? The code I wrote is not draining the queue properly. Thanks for your help.
    Jeba

    Tom Barnes wrote:
    Something like:

    onMessage(Message firstMsg) {
      int count = 0;
      process firstMsg
      nextMsg = syncConsumer.receive(/* block up to */ 10 /* millisecs */);
      while (nextMsg != null) {
        process nextMsg
        if (++count > 10) break;
        nextMsg = syncConsumer.receive(10);
      }
      firstMsg.acknowledge();  // in client-ack mode this acknowledges all messages received so far
    }

    Supply your own exception handling. I think you may need to configure "MessagesMaximum" to 1 on the connection factory so that the async consumer message backlog (pipeline) doesn't accumulate messages.

    Jeba Bhaskaran wrote:
    I have a startup class that waits for the messages. During startup I get the connection and session and set the startup class as the message listener. After processing the first message, I call the receive method on the receiver that was obtained at startup. Is this correct? I put System.out's in and see that onMessage is being called for each of the messages in the queue instead of the drain code getting each message.

    Tom Barnes wrote:
    Registering a listener in a startup class, or running any kind of thread in a startup class, is not supported. That being said, we do have multiple customers who do this. Sigh. The main thing to be aware of is thread-specific context that can get lost: transaction context, class-loader/class-not-found errors, security context. If your testing reveals such problems, search this newsgroup for info using Google's newsgroup search. Keywords "startup" and "classnotfound" ought to be sufficient.
    Use a different receiver than the one you use as the message listener, but make sure both receivers are on the same session. Alternatively, you can use two sessions, but make sure to acknowledge both sessions.
    Remember to set the "MessagesMaximum" on the connection factory to one.

    Jeba Bhaskaran wrote:
    Can you explain method #1 a little bit more? How can I do batch processing by using this method? Thanks for your help.

    Tom Barnes wrote:
    For example:

    during client init {
      msgcount = 0
      sendtimermsg();
    }

    sendtimermsg() {
      send a non-persistent timer message
      with a delivery time set to one second into the future
    }

    onMessage(msg) {
      if (msg is the timer message) {
        if (msgcount > 0) session.acknowledge();
        sendtimermsg();
        return;
      }
      do msg work
      if (++msgcount > 10) {
        msgcount = 0;
        session.acknowledge();
      }
    }

    The above is a bit crude in that it acknowledges on every timer msg. It could be refined so that a timer msg is sent on the first message of a batch, and so that a timer msg is ignored if it is from a previous batch.

    Jeba Bhaskaran wrote:
    How do I set a delivery time for a message? I looked at the doc for Message and didn't see anything that allowed me to set a time for delivery.
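    The batch-drain pattern of Method 2 can be illustrated outside of JMS. A minimal Python sketch using the standard queue module (no WebLogic API involved; the batch size and timeout values are arbitrary):

    ```python
    import queue

    def drain_batch(q, max_batch=10, timeout=0.01):
        """Collect up to max_batch messages: block for the first one,
        then poll with a short timeout, mirroring the short-timeout receive()."""
        batch = [q.get()]                 # first message: block until it arrives
        while len(batch) < max_batch:
            try:
                batch.append(q.get(timeout=timeout))  # short-timeout receive()
            except queue.Empty:
                break                     # queue drained: process what we have
        return batch                      # caller processes + acknowledges the batch

    # Usage: feed 25 messages; the consumer sees them in batches of at most 10.
    q = queue.Queue()
    for i in range(25):
        q.put(i)
    batches = []
    while not q.empty():
        batches.append(drain_batch(q))
    print([len(b) for b in batches])      # → [10, 10, 5]
    ```

    The JMS-specific part that the sketch cannot show is acknowledgement: in client-ack mode the whole batch is acknowledged once, which is the entire point of batching.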
              

  • Framework/ Blanket Purchase Order Processing

    Hello Experts!
    Is there anyone here who can explain the entire blanket/framework order process for stock items (document type FO, but the item category is blank — is this correct?) from purchase requisition to goods receipt? Also, how will I process the subsequent deliveries? In which transaction can I generate the delivery schedule based on the MRP run to send to the vendor? Please help with this.
    Thanks in advance! =)

    Hi,
    I don't think that is the better option, because if you use a blank item category there will be no limit on the value, which is the main point of the whole blanket PO scenario.
    And if there is no value limit after doing the GR, the process becomes improper.
    The normal transaction for delivery schedules is ME38. With MRP, when you maintain your agreement in the source list as a source and mark it MRP-relevant, the system automatically does the rest of the work if a unique source is found.
    Hope it helps.
    Other questions will be answered subsequently.
    Regards,
    Priyanka.P
    Edited by: Priyanka Paltanwale on Jul 24, 2008 11:42 AM

  • T codes with similar functions in AP and AR process

    Hi,
    I need a list of transaction codes that have similar functions across the entire AP and AR process.
    Thanks
    SM

    Hi,
    From the menu, choose Extras > Settings and tick "Show Technical Names".
    Then look through the standard menu for AP & AR: items with similar descriptions = similar T-codes.
    regards

  • Does APEX re-write output HTML from an application process

    I have a process which contains the following line:
    htp.prn('<input type="hidden" name="F05" value="' || l_submitted_previously || '"/>');
    However, this is outputting the following HTML:
    <input name="F05" value="N" type="hidden">
    As you can see, the attributes in the tag are not in the order I specified, and the closing slash is omitted.
    Can anyone explain this?

    APEX 3.2.1.00.12
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    Unfortunately I don't know the web server architecture
    Firefox v13
    Custom theme
    Custom templates
    Hidden item
    I don't think the last 4 items listed above will have a bearing on this because it's a dynamically produced item created by an application process. I have pasted the entire code for the process in case that helps.
    DECLARE
         l_year_passed number := wwv_flow.g_x01;
         l_expenditure_value number;
         l_submitted_previously varchar2(1) := 'N';
    BEGIN
         SELECT DECODE(COUNT(*),0,'N','Y')
         INTO l_submitted_previously
         FROM F_EXPENDITURE
         WHERE FINANCIAL_YEAR_KEY = l_year_passed;
        for rec in (SELECT *
                    FROM l_service_group
                    WHERE service_designated_year<=l_year_passed
                    AND service_designated_year>0
                    order by NON_SERVICE_GROUP_FLAG, SERVICE_GROUP_SHORT_NAME)
         loop
         BEGIN
              SELECT EXPENDITURE_VALUE
              INTO l_expenditure_value
              FROM F_EXPENDITURE
              WHERE FINANCIAL_YEAR_KEY = l_year_passed
              AND SERVICE_GROUP_KEY = rec.SERVICE_GROUP_KEY;
              EXCEPTION
              WHEN NO_DATA_FOUND THEN
              l_expenditure_value := NULL;
         END;
              htp.prn('<tr><td><label for="F01_' || rec.SERVICE_GROUP_KEY || '"><span class="t13RequiredLabel"><img src="/i/requiredicon_status2.gif" alt="">' || rec.SERVICE_GROUP_SHORT_NAME || '</span></label></td><td><input type="text" name="F01" id="F01_' || rec.SERVICE_GROUP_KEY || '" value="'||l_expenditure_value||'"/><input type="hidden" name="F02" id="F02_' || rec.SERVICE_GROUP_KEY || '" value="' || rec.SERVICE_GROUP_KEY || '"/></td></tr>');
         end loop;
         if l_year_passed = 0 then
              htp.p('<p>Please select a financial year</p>');
         end if;
         htp.prn('<input type="hidden" name="F05" value="' || l_submitted_previously || '"/>');
    END;

  • Avoid duplicates processing

    Hi All
    I have a requirement in which I get a flat file from the source system every day.
    I load this flat file into a staging table. The file can contain up to 6 million records.
    The next day's file contains both changed and unchanged records;
    even if a record has not changed, it appears again in the next day's file.
    From the staging table, the data is processed into other tables.
    Since 75% of the data will be duplicates, I do not want to process all of it every day.
    How do I do this? Getting only the changed records from the source system is ruled out, and
    I can't do a "diff" on Unix between the new and old files to load only the changed records.
    I am thinking of 2 ways to do it:
    1) Create a duplicate set of tables similar to the staging table and load the previous day's data
    into them. While selecting the data, use the MINUS operator between the tables before processing.
    2) Retain the previous day's data in the same table and SELECT DISTINCT
    from the table while processing.
    Which is the best way to do this? Any other solutions are also welcome.
    Performance is a major factor here, and I am using Oracle 8i.
    Thanks
    Ashwin N.

    If you are going to pre-process the rows to eliminate the duplicates, I would suspect that the MINUS approach would be faster. I would do something like:
    -- load into new
    SELECT * FROM new
    MINUS
    SELECT * FROM old;
    -- processing to real table
    DROP TABLE old;
    RENAME new TO old;
    CREATE TABLE new AS
    SELECT * FROM old
    WHERE 1=0;
    However, I'm not entirely convinced that the processing into the real table would be enough faster to offset the time taken to eliminate the duplicates before processing. My first impulse, if you can do your processing in a SQL statement, would be to just load the records into an empty staging table every day and do an update and an insert statement something like:
    UPDATE real_table r
    SET (col1, col2, ... , coln) =
        (SELECT col1, col2, ... , coln
         FROM staging_table s
         WHERE r.pk1 = s.pk1 AND
               r.pk2 = s.pk2)
    WHERE EXISTS (SELECT 1
                  FROM staging_table s
                  WHERE r.pk1 = s.pk1 AND
                        r.pk2 = s.pk2 AND
                        (r.col1 <> s.col1 OR
                         r.col2 <> s.col2 OR
                         r.coln <> s.coln));
    INSERT INTO real_table
    SELECT *
    FROM staging_table s
    WHERE NOT EXISTS (SELECT 1
                      FROM real_table r
                      WHERE s.pk1 = r.pk1 AND
                            s.pk2 = r.pk2);
    (Note the change-detection predicate belongs in the WHERE EXISTS, so that unchanged rows are left alone rather than being nulled out by an empty subquery.) You should rebuild the PK on staging_table after the load. If you do this parallel, nologging, it should be reasonably quick. Even better would be to get the data sorted by PK from the source system, because you could then use the NOSORT option on the index.
    TTFN
    John
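    The effect of the MINUS pre-processing and the UPDATE/INSERT pair can be seen outside the database. A small Python sketch with invented rows (keys and column values are made up; real tables would of course stay in SQL):

    ```python
    # Hypothetical staging data keyed by primary key; values are the non-key columns.
    old = {(1,): ("alpha", 100), (2,): ("beta", 200), (3,): ("gamma", 300)}  # yesterday's load
    new = {(1,): ("alpha", 100),   # unchanged -> should be skipped
           (2,): ("beta", 250),    # changed   -> should be updated
           (4,): ("delta", 400)}   # new       -> should be inserted

    # MINUS equivalent: keep only the rows in `new` that are not identical in `old`.
    changed = {pk: row for pk, row in new.items() if old.get(pk) != row}

    # Split the survivors the way the UPDATE/INSERT pair does.
    to_update = {pk: row for pk, row in changed.items() if pk in old}
    to_insert = {pk: row for pk, row in changed.items() if pk not in old}

    print(sorted(changed), sorted(to_update), sorted(to_insert))
    # → [(2,), (4,)] [(2,)] [(4,)]
    ```

    The 75%-unchanged rows drop out at the `changed` step, which is exactly what the MINUS buys you before any further processing.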

  • Can't "get" entire site

    Hello,
    I'm trying to get my entire site using DW 5.5 so that I can work on it locally, but lots of the files are being left behind. I follow the instructions, click Yes when prompted whether I want to get the entire site, and the process begins. It then stops, saying that it's complete, but most of the data is missing. It manages to get about 350 MB, but the total size of the site is nearly 4 GB.
    Any advice much appreciated.
    Cheers in advance,
    Matt.

    Why don't you just access the site by FTP and get the files? I never use Dreamweaver for putting or getting anything; it's useless in that regard.
    Geez, just read 4 GB... what is it, a hi-res image/video library? I would definitely either use FTP, or go into the site's control panel and make a backup of the site files and download them from there. Most control panels send you a link to a zip file once the server has backed up the files.

  • Partial message processing

    Hi,
    If I have 100 records in a file, and each record needs to generate an IDoc:
    if one record's data is wrong in the file, will XI skip only that record's IDoc,
    or does the complete message stop and no IDoc get generated?
    I ask because my project has scenarios of both of the following types:
    1) Correct records should be processed; error records should not be processed.
    2) If there is one error record in the file, the entire file should not be processed.
    Can anyone suggest the best ways to handle both situations?
    regards,
    venu.

    Hi,
    This might be one of the solutions: (using BPM)
    1. Have an interface mapping between the  source sender interface and the receiver abstract (BPM) interface.
    This interface mapping does the required validations.
    Ex: it might check whether all the mandatory fields are present and whether the values of all the tags are correct.
    If something is wrong, you would set the status tag to 1.
    2. Within the BPM, after receiving the validated message, check if the <status> tag is 1, if so, stop the processing, else, carry out the transformation step to map the record to the idoc.
    3. After this, send the idoc to the target system, using the send step.
    Regards,
    Smitha.
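    Stripped of the XI/BPM machinery, the two scenarios reduce to a validation-and-partition step. A minimal Python sketch (the record layout and validity rule are invented for illustration):

    ```python
    def partition(records, is_valid):
        """Split records into (valid, invalid) in one pass."""
        good, bad = [], []
        for rec in records:
            (good if is_valid(rec) else bad).append(rec)
        return good, bad

    # Invented sample records; a negative quantity stands in for "wrong data".
    records = [{"id": 1, "qty": 5}, {"id": 2, "qty": -1}, {"id": 3, "qty": 7}]
    good, bad = partition(records, lambda r: r["qty"] >= 0)

    # Scenario 1: generate IDocs for `good`, route `bad` to an error branch.
    # Scenario 2: refuse the whole file as soon as `bad` is non-empty.
    accept_whole_file = not bad
    print(len(good), len(bad), accept_whole_file)  # → 2 1 False
    ```

    In the BPM design above, the status tag plays the role of `bad`: scenario 2 stops on the first non-empty error set, while scenario 1 carries only the valid partition through the transformation step.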

  • SSIS and CDC - Incorrect state at end of "Mark Processed Range"

    The Problem
    I currently have CDC running on a table named subscription_events.  The corresponding CT table is being populated with new inserts, updates, and deletes.
    I have two SSIS flows that move data from subscription_events into another table in a different database.  The first flow is the initial flow and has the following layout:
    The Import Rows Into Vertica step simply has a source and a destination and copies every row into another table.  As a note, the source table is currently active and has new rows flowing into it every few minutes.  The
    Mark Initial Load Start/End steps store the current state in a variable and that is stored in a separate table meant for storing CDC names and states.
    The second flow is the incremental flow and has the following layout:
    The Import Rows Into Vertica step uses a CDC source and should pull the latest inserts, updates, and deletes from the CT table and these should be applied to the destination.  Here is where the problem resides; I
    never receive anything from the CDC source, even though there are new rows being inserted into the
    subscription_events table and the corresponding CT table is growing in size with new change data.
    To my understanding, this is how things should work:
    Mark Initial Load Start
    CDC State should be ILSTART
    Data Flow
    Mark Initial Load End
    CDC State should be ILEND
    Get Processing Range (First Run)
    CDC State should be ILUPDATE
    Data Flow
    Mark Processed Range (First Run)
    CDC State should be TFEND
    Get Processing Range (Subsequent Runs)
    CDC State should be TFSTART
    Data Flow
    Mark Processed Range (Subsequent Runs)
    CDC State should be TFEND
    Repeat the last three steps
    This is not how my CDC states are being set, though... Here are my states along the same process.
    Mark Initial Load Start
    CDC State is ILSTART
    Data Flow
    Mark Initial Load End
    CDC State is ILEND
    Get Processing Range (First Run)
    CDC State is ILUPDATE
    Data Flow
    Mark Processed Range (First Run)
    CDC State is ILEND
    Get Processing Range (Subsequent Runs)
    CDC State is ILUPDATE
    Data Flow
    Mark Processed Range (Subsequent Runs)
    CDC State is ILEND
    Repeat the last three steps
    I am never able to get out of the ILUPDATE/ILEND loop, so I am never able to get any new data from the CT table.  Why is this happening and what can I do to fix this?
    Thank you so much, in advance, for your help!  :)
    Edit 1
    Here are a couple of articles that sort of describe my situation, though not exactly.  They also did not help me resolve this issue, but it might help you think of something I can try.
    http://www.bradleyschacht.com/understanding-the-cdc-state-value/
    http://msdn.microsoft.com/en-us/library/hh231087.aspx
    The second article includes this image, which shows the ILUPDATE/ILEND loop I am trapped in.
    Edit 2
    Last week (May 26, 2014) I disabled then re-enabled CDC on the subscription_events table.  This didn't change anything, so I then disabled CDC on the entire database, re-enabled CDC on the database, and then enabled CDC on the
    subscription_events table.  This did make CDC work for a few days (and I thought the problem had been resolved by going through this process).  However, at the end of last week (May 30, 2014) I needed to re-load the entire table via
    this process, and I ran into the same problem again.  I'm still stuck in this loop and I'm not sure why or how to get out of it.
    Edit 3
    Before I was having this problem, I was having a separate issue which I posted about here:
    CDC is enabled, but cdc.dbo<table-name>_CT table is not being populated
    Could running these commands have created this problem for me on SQL Server?

    Hi Keolanui,
    Thank you for your question.
    I am trying to involve someone more familiar with this topic for a further look at this issue. Sometime delay might be expected from the job transferring. Your patience is greatly appreciated.
    Regards,
    Mike Yin
    TechNet Community Support
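    The expected-vs-observed state sequences in the question can be compared mechanically. A small Python sketch (step names abbreviated from the walkthrough above) pinpoints where the run first deviates:

    ```python
    # Expected CDC state after each control-flow step, taken from the
    # walkthrough in the question (first run, then steady state).
    EXPECTED = [
        ("Mark Initial Load Start",           "ILSTART"),
        ("Mark Initial Load End",             "ILEND"),
        ("Get Processing Range (first run)",  "ILUPDATE"),
        ("Mark Processed Range (first run)",  "TFEND"),
        ("Get Processing Range (subsequent)", "TFSTART"),
        ("Mark Processed Range (subsequent)", "TFEND"),
    ]

    def first_deviation(observed):
        """Return (step, expected, observed) for the first mismatch, else None."""
        for (step, want), got in zip(EXPECTED, observed):
            if got != want:
                return (step, want, got)
        return None

    # The states actually reported in the question:
    observed = ["ILSTART", "ILEND", "ILUPDATE", "ILEND", "ILUPDATE", "ILEND"]
    print(first_deviation(observed))
    # → ('Mark Processed Range (first run)', 'TFEND', 'ILEND')
    ```

    This confirms what the question describes: the first Mark Processed Range is where the state should flip to TFEND but instead falls back to ILEND, keeping the run in the initial-load loop.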

  • LO Cockpit Extraction Process...

    Hi Gurus,
    I would like help understanding the following:
    1. How can we initialize the delta process selectively?
    2. What is the difference between Queued Delta and Unserialized V3 Update?
    3. In the entire LO Cockpit extraction process, where do we use the V1 and V2 updates?
    4. What is the ENQUEUE concept?
    Thanks
    NewBW

    Hi,
    Try with these documents:
    www.service.sap.com/bi --> Documentation --> Business Content Documentation:
    - 2.0B Extracting MM (Purchasing) Transaction Data
    - 2.0B Extraction of Logistics Transaction Data
    - 2.0B Extraction of logistics transaction data for retail
    - Extracting Sales and Distribution Transaction Data - vers. 2
    You could also go through these weblogs on LO extraction
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    /people/sap.user72/blog/2005/04/19/logistic-cockpit-a-new-deal-overshadowed-by-the-old-fashioned-lis
    /people/sap.user72/blog/2005/02/14/logistic-cockpit--when-you-need-more--first-option-enhance-it
    Also if you can send me your email, I can send some LO extraction docs. Hope this helps.
    Thanks
    Sumit

  • Auditing a process started at boot time

    Hi All,
    I am having a problem with auditd. I am starting a process of my own at boottime that I would like to audit. I think the problem lies in that my process is not getting an auditID. If I run the process from a terminal after the system is started it audits correctly, only when started at boot will it not audit.
    I have the process's audit class defined in the audit_control file as such (md is my audit class):
    dir:/var/audit
    flags:md,lo,ad,-fw,-fm,-fd,-nt,no
    minfree:50
    naflags:md
    I thought placing it under naflags would allow it to be audited without attribution, but this does not work. The audit event and audit class are also properly in place.
    Any help with getting a process to audit when started at boot would be greatly appreciated.
    Victor

    Hi,
    Processes are an OS-level concept, so they are not able to start before the OS itself boots. The closest method is to replace the Windows bootloader with your own, load Windows through it, and have your bootloader perform any calculations or start files as soon as Windows loads. This is the only way to start a process that close to boot time.
    On Windows Vista and later this technique does not work, because session separation by design isolates processes into sessions: a process started before "Winlogon.exe" would not fit into any session and would crash the entire OS. Therefore it is not really possible.
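
    For the Solaris BSM setup described in the question, a few checks may help narrow this down. The command names below are from Solaris 10-era BSM; verify them against your release's man pages before relying on them:

    ```
    # Confirm the custom class 'md' exists and events are mapped to it
    grep md /etc/security/audit_class
    grep md /etc/security/audit_event

    # Re-read audit_control without a reboot
    audit -s

    # After reproducing the boot-time start, check whether any
    # 'md' events were actually recorded
    auditreduce -c md /var/audit/* | praudit
    ```

    If the boot-started process still produces no records, comparing the active non-attributable mask before and after boot (e.g. with auditconfig, where your release supports it) can show whether naflags was applied at the time auditd started.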

  • Plugin consuming the entire physical memory available

    Hello,
    I wrote a plugin using the Acrobat SDK to convert my PDF files to an image format. When I run this plugin in batch mode on multiple multipage files, Acrobat eats up the entire physical memory after processing a certain number of files. I have ensured that every object I acquire or allocate is freed in the code, yet even after running memory-detection tools I cannot figure out how Acrobat (or the plugin) is eating up the available memory. Can someone suggest a solution? Is there any way to avoid this? If I am missing something, please let me know.
    Thanks in advance.
    Regards
    hary

    lrosenth replied (Dec 30, 2010):
    No clue how your software works, so I really can't debug it for you...
    Make sure you release each page after you image it; that's a common mistake. Otherwise, watch your memory, etc.
    You also don't mention what version of Acrobat you are using... I assume you are current?!?

    We could all help if you post your code so we can take a look at it. 
    Francis Fukuyama
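
    lrosenth's point about releasing each page generalizes: in a batch loop, any per-page handle must be released inside the loop, not after the whole batch. A Python sketch of the pattern — the handle class here is a stand-in for an SDK page handle (for instance, what an acquire call would return), not actual Acrobat SDK code:

    ```python
    class PageHandle:
        """Stand-in for an SDK page handle; counts live handles
        so a leak becomes visible as a growing number."""
        live = 0

        def __init__(self, number):
            self.number = number
            PageHandle.live += 1

        def release(self):
            PageHandle.live -= 1

    def render_batch(page_count):
        """Acquire, image, and release each page inside the loop.
        Forgetting release() here makes `live` grow with every page,
        which is the classic cause of batch-mode memory growth."""
        for n in range(page_count):
            page = PageHandle(n)
            try:
                pass  # imaging work would go here
            finally:
                page.release()  # release per page, not at end of batch
        return PageHandle.live

    print(render_batch(1000))  # prints 0
    ```

    The try/finally guarantees the release even if the imaging step raises, which matters in batch runs where one bad file should not leak the handles of every file after it.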

Maybe you are looking for

  • Problems connecting to FaceTime

    I recently purchased the iMac specifically for the FaceTime feature. My wife and son tried calling me via FaceTime from our iMac to my iPhone 4, but the connection failed. Before I left for my business trip I made sure that FaceTime worked, which it

  • How to preserve albums/folders when transferring library

    I just copied my entire iPhoto library folder on my iMac to an external drive, then I transferred the library to my MacBook. I have organized my photos on my iMac into many albums and folders. That organization did not remain on the external drive or

  • Update the Asset

    Hi, How can I update the existing asset data through "FA_MASS_ADDITIONS" interface table? When I run the "POST_MASS_ADDITIONS", it gives me below error:- Request ID: 798222 Asset 3000 already exists in book Mass Addition ID: 5002 ==> ** FAILED ** Num

  • Possible to shut off iCloud SMTP Server on iPhone?

    Is it possible to use my own smtp server on my icloud account on my iphone.  Really starting to annoy me that I can't figure out how to disable the primary server and use my own.  Thanks!

  • How to drill down to a different hierarchy

    Hi all I have a geography dimension with 4 different type of hierarchies.In the business model those 4 hierarchies share the same parent(WorldWide) and the same lowest level(Store). In the presentation model I create those 4 hierarchies by dupplicati