Synchronized scope/performance confusion

I have a servlet that takes the path to an image and the desired size, resizes the image, and writes it back out through the response.
In it there are a couple of lines that need to be throttled, or the server quickly becomes overloaded when several people are looking at different pages of thumbnails that are being rendered from large images. To do this I started playing around with synchronizing them. This caused a larger than expected performance decrease for two of the approaches and actually increased performance in the third. So the question is: what on earth is the difference between the following three examples?
// the servlet's doGet calls doPost, and doPost delegates to java2D
    protected void doPost(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        java2D(request, response);
    }
EXAMPLE 1 - This was just horrible. I understand it locks the whole method.
    private synchronized void java2D(HttpServletRequest request, HttpServletResponse response) {
        BufferedImage thumbImage = null; // the shrunken image (set up in code omitted here)
        Image smallImage;                // the scaled source image (set up in code omitted here)
        // ...
        Graphics2D g = thumbImage.createGraphics();
        g.drawImage(smallImage, 0, 0, null);
        // ...
    }
EXAMPLE 2 - This really did not behave any differently than the example above.
    private void java2D(HttpServletRequest request, HttpServletResponse response) {
        BufferedImage thumbImage = null; // the shrunken image (set up in code omitted here)
        Image smallImage;                // the scaled source image (set up in code omitted here)
        // ...
        synchronized (this) {
            Graphics2D g = thumbImage.createGraphics();
            g.drawImage(smallImage, 0, 0, null);
        }
        // ...
    }
EXAMPLE 3 - This time I created a static inner class to do the work and it performs great (on our integration server). I suspect that somehow I am failing to limit it to one render at a time, but it seems that it should only run one at a time since the method is synchronized.
//The inner class
    static class ImageRender {
        public static synchronized void drawImage(BufferedImage thumbImage, Image smallImage) {
            Graphics2D g = thumbImage.createGraphics();
            g.drawImage(smallImage, 0, 0, null);
        }
    }

//The servlet method now just delegates to the inner class
    private void java2D(HttpServletRequest request, HttpServletResponse response) {
        BufferedImage thumbImage = null; // the shrunken image (set up in code omitted here)
        Image smallImage;                // the scaled source image (set up in code omitted here)
        // ...
        ImageRender.drawImage(thumbImage, smallImage);
    }

Thanks for any discussion.

The difference between 2) and 3) is which monitor they synchronize on. In case 3) it synchronizes on the Class object representing the inner class. My guess is that something in your server's processing is synchronizing on the servlet instance (and is therefore being blocked).
Try creating a monitor just for this purpose:

private static Object monitor = new Object();

synchronized (monitor) {
    Graphics2D g = thumbImage.createGraphics();
    g.drawImage(smallImage, 0, 0, null);
}

But you probably ought to consider being a bit more sophisticated, like allowing up to a certain number of these things to run simultaneously and, probably, caching the results.
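To sketch the "certain number at a time" idea in code: the class below is only an illustration under stated assumptions, not your servlet's code - the class name, the permit count of 4, and the InterruptedException handling are all guesses you would adapt. A java.util.concurrent.Semaphore gives you a throttle that is wider than a single lock but still bounded:

import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.util.concurrent.Semaphore;

// Hypothetical replacement for the ImageRender helper: at most 4 renders run at once.
final class ThrottledImageRender {
    private static final Semaphore PERMITS = new Semaphore(4); // tune to your CPU and heap

    static void drawImage(BufferedImage thumbImage, Image smallImage) throws InterruptedException {
        PERMITS.acquire();            // blocks only while 4 renders are already in flight
        try {
            Graphics2D g = thumbImage.createGraphics();
            try {
                g.drawImage(smallImage, 0, 0, null);
            } finally {
                g.dispose();          // always free the native resources held by the Graphics2D
            }
        } finally {
            PERMITS.release();
        }
    }
}

Unlike a single monitor, the permit count lets you scale the throttle to the hardware instead of serializing every request behind one lock.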

Similar Messages

  • PI synchronous webservice performance

    - Ever stuck with the question of ways to reduce the response times of XI webservices calls??
    - Need better performance
    - Need better response times........
    Just Published
    /people/community.user/blog/2007/07/09/pi-beef-up-the-performance-of-synchronous-webservices
    Cheers,
    Naveen

    Hello Michal,
    Currently the direct connection is only for the adapter. Since ABAP proxies don't reside on the Adapter Framework this cannot be done. But you never know. For the first time SAP is using the term Service Bus in the presentation. This opens up a whole new space. If SAP is moving away from a central hub design to more of an ESB (enterprise service bus), then there are new features to be explored.
    Cheers,
    Naveen

  • Synchronized Block - Performance Issue

    Hi All,
    In my java class contains lot of synchronized methods...which in turn leads to performance issue....
    If i try to remove the synchronized methods...it leads to deadlock problem...
    Is there a way without removing the synchronized methods..to improve the performance...
    Please suggest any solution

    "In my java class contains lot of synchronized methods...which in turn leads to performance issue...."
    It causes serialization of critical sections of code so that they will execute correctly. You can't describe that as a performance problem unless you can show that a faster and correct implementation exists. It might: for example, you could make your synchronized blocks smaller, use concurrent data structures, etc. But what you can't do is compare it to the same code without synchronization and say it's slower. It is, but the observation has no meaning because the unsynchronized version isn't correct.
    "If i try to remove the synchronized methods...it leads to deadlock problem..."
    That isn't possible unless you didn't remove them all. Deadlocks result from acquiring locks in different orders.
    "Is there a way without removing the synchronized methods..to improve the performance..."
    Almost certainly. Post some code.
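    As a concrete illustration of the "smaller blocks / concurrent data structures" advice - the class, field and method names below are invented for the example, not taken from the poster's code - a synchronized method that only guards a map can often be replaced with ConcurrentHashMap:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class UserCache {
        // Before: public synchronized User get(String id) over a plain HashMap,
        // which serializes every caller behind the object lock.

        // After: the concurrent map does its own fine-grained locking.
        private final Map<String, User> cache = new ConcurrentHashMap<>();

        User get(String id) {
            // computeIfAbsent runs the loader at most once per key,
            // and lookups of other keys are never blocked.
            return cache.computeIfAbsent(id, this::loadUser);
        }

        private User loadUser(String id) {
            return new User(id); // stand-in for an expensive lookup
        }

        static class User {
            final String id;
            User(String id) { this.id = id; }
        }
    }

    The same idea applies to any synchronized method whose critical section is wider than the shared state it actually protects: shrink the block to just the shared mutation, or hand the state to a java.util.concurrent structure.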

  • Synchronized session performance issue with Linux

    Hi,
    I have some code (in a Struts Action) that streams out a BufferedImage object from a user's session object and then removes it.
    I remove the BufferedImage object from session in a synchronized block.
    When testing on my Windows 2000 laptop performance is fantastic, however, when I scale up to our Linux test servers, it is much slower, 5 secs. compared to 1 sec. before.
    Both platforms running identical JVMs and versions of Tomcat - anyone else experience this?
    Best Regards

    The fact that the block is synchronized should not impact performance unless multiple clients access the same session (and I don't see how that could happen). What object are you synchronizing on?
    How big is the BufferedImage? How are you testing the application? You say you are streaming the image as output -- do you test with a web browser pointing at localhost in both cases? A large BufferedImage being transferred over the ethernet as opposed to the local loopback would certainly cause the extra delay you're seeing. What are the specs of your Win2k laptop and Linux test servers? You really need to do more investigation before you start making claims like "Java is slower on Linux than Windows" (which is effectively what you're suggesting).
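    For what it's worth, if the synchronized block also covers the streaming of the image to the response, that alone serializes concurrent requests for the whole transfer. A minimal sketch of keeping the critical section small (the attribute name and the choice of the session object as the lock are assumptions for illustration, not the poster's code):

    import java.awt.image.BufferedImage;
    import javax.servlet.http.HttpSession;

    class ThumbnailStreamer {
        BufferedImage takeImage(HttpSession session) {
            // Hold the lock only long enough to detach the image from the session.
            synchronized (session) {
                BufferedImage img = (BufferedImage) session.getAttribute("pendingThumbnail");
                session.removeAttribute("pendingThumbnail");
                return img;
            }
        }
        // The slow part (e.g. ImageIO.write to response.getOutputStream()) then runs
        // outside the synchronized block, so other requests don't queue behind the network write.
    }

    Note that locking on the HttpSession is a common idiom but not a hard guarantee, since a container may expose different facade objects for the same session; it is shown here only to illustrate narrowing the critical section.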

  • Performance problems with use SCOPE instruction?

    Hi All!
    We have an application working with a cube based on SSAS 2008 R2. We also use the writeback function to change data in the cube.
    Now I'm looking for the bottleneck in our queries.
    We have following MDX query(for example):
    select
    non empty{
    ([Date].[Date].[All].children
    , [Forecast Type].[Forecast Type].[All].children
    , [Stock].[Stock].[All].children
    , [Shipment].[Shipment].[All].children
    , [Invoice].[Invoice].[All].children
    , [Way Bill External].[Way Bill External].[All].children
    , [SD User].[User].[All].children
    , [SD Date].[Date].[All].children
    , [CD User].[User].[All].children
    , [CD Date].[Date].[All].children
    , [Forecast Basis].[Forecast Basis].[All].children
    , [Orders].[Orders].[All].children
    , [Rolling Forecast].[Rolling Forecast].[All].children
    , [Long Range Forecast].[Long Range].[All].children
    , [Calculated FCCR].[Calc Price].[All].children
    , [Write Table Guids].[GuidObj].[All].children)
    } dimension properties member_unique_name
    , member_type
    , member_value on rows
    , non empty {({[Measures].[Price CR]
    , [Measures].[Cost]
    , [Measures].[Cost USD]
    , [Measures].[Cost LME]
    , [Measures].[Cost MWP]
    , [Measures].[Weight]
    , [Measures].[Weight Real]})} dimension properties member_unique_name
    , member_type
    , member_value on columns
    from [MainCubeFCT]
    where ({[Currency].[Currency].&[4]}
    , {[Forecast Basis].[Customer].&[4496]}
    , {[Forecast Basis].[Consignee].&[4496]}
    , {[Forecast Condition].[Forecast Condition].&[1]}
    , {[Forecast Basis].[Alloy].&[56]}
    , {[Date].[Year Month].[Month].&[2015-05-01T00:00:00]}
    , {[Date Type].[Date Type].&[2]}
    , {[Forecast Basis].[Business Sphere2].&[4]}
    , {[Forecast Status].[Forecast Status].&[2]})
    Execution duration of this query (Query End event):
    cold (after clearing the cache) - 1000
    warm - 500
    Max loss is on the Calculate Non Empty event - 95%.
    After some investigation I found the bottleneck in 2 measures: [Measures].[Weight] and [Measures].[Price CR].
    If they are removed from the query then the execution duration equals 50.
    In our cube the measure [Measures].[Weight] is overridden in the calculation script as:
    scope([Measures].[Weight]);
    This = iif((round([Measures].[Weight], 3)<>0), round([Measures].[Weight], 3), null);
    end scope;
    But if I change the code to
    scope([Measures].[Weight]);
    This = [Measures].[Weight];
    end scope;
    query performance does not improve...
    If I delete this override from the cube calculations, I get good performance, acceptable to me.
    We need to keep the business logic and still get acceptable performance.
    What is wrong in the measures, calculations or query? Any ideas?
    If need additional information let me know.
    Many thanks, Dmitry.

    Hi Makarov,
    According to your description, you get a performance issue when using the SCOPE() statement. Right?
    In Analysis Services, the SCOPE() statement redefines that part of your cube space, while a calculated member is much more isolated. In this scenario, I suggest you directly create a measure, because a calculated measure only returns values where the amounts are recorded directly to the parent, without including children values.
    Reference:
    Analysis Services Query Performance Top 10 Best Practices
    Top 3 Simplest Ways To Improve Your MDX Query
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Performance In Simple Scenarios

    I have done some performance testing to see if asynchronous triggers performs any better than synchronous triggers in a simple audit scenario -- capturing record snapshots at insert, update and delete events to a separate database within the same instance of SQL Server.
    Synchronous triggers performed 50% better than asynchronous triggers; this was with conversation reuse and the receive queue activation turned off, so the poor performance was just in the act of forming and sending the message, not receiving and processing.  This was not necessarily surprising to me, and yet I have to wonder under what conditions would we see real performance benefits for audit scenarios.
    I am interested if anyone has done similar testing, and if they received similar or different results.  If anyone had conditions where asynchronous triggers pulled ahead for audit scenarios, I would really like to hear back from them.  I invite any comments or suggestions for better performance.
    The asynchronous trigger:
    Code Snippet
    ALTER TRIGGER TR_CUSTOMER_INSERT ON DBO.CUSTOMER
    FOR INSERT AS
    BEGIN
      DECLARE
        @CONVERSATION UNIQUEIDENTIFIER ,
        @MESSAGE XML ,
        @LOG_OPERATION CHAR(1) ,
        @LOG_USER VARCHAR(35) ,
        @LOG_DATE DATETIME;
      SELECT TOP(1)
        @CONVERSATION = CONVERSATION_HANDLE ,
        @LOG_OPERATION = 'I' ,
        @LOG_USER = USER() ,
        @LOG_DATE = GETDATE()
      FROM SYS.CONVERSATION_ENDPOINTS;
      SET @MESSAGE =
      ( SELECT
          CUST_ID = NEW.CUST_ID ,
          CUST_DESCR = NEW.CUST_DESCR ,
          CUST_ADDRESS = NEW.CUST_ADDRESS ,
          LOG_OPERATION = @LOG_OPERATION ,
          LOG_USER = @LOG_USER ,
          LOG_DATE = @LOG_DATE
        FROM INSERTED NEW
        FOR XML AUTO );
      SEND ON CONVERSATION @CONVERSATION
        MESSAGE TYPE CUSTOMER_LOG_MESSAGE ( @MESSAGE );
    END;
    The synchronous trigger:
    Code Snippet
    ALTER TRIGGER TR_CUSTOMER_INSERT ON DBO.CUSTOMER
    FOR INSERT AS
    BEGIN
      DECLARE
        @LOG_OPERATION CHAR(1) ,
        @LOG_USER VARCHAR(15) ,
        @LOG_DATE DATETIME;
      SELECT
        @LOG_OPERATION = 'I' ,
        @LOG_USER = USER() ,
        @LOG_DATE = GETDATE()
      INSERT INTO SALES_LOG.DBO.CUSTOMER
      SELECT
        CUST_ID = NEW.CUST_ID ,
        CUST_DESCR = NEW.CUST_DESCR ,
        CUST_ADDRESS = NEW.CUST_ADDRESS ,
        LOG_OPERATION = @LOG_OPERATION ,
        LOG_USER = @LOG_USER ,
        LOG_DATE = @LOG_DATE
      FROM INSERTED NEW
    END;

    Synchronous audit has to do one database write (one insert). Asynchronous audit has to do at least an insert and an update (the SEND) plus a delete (the RECEIVE) and an insert (the audit itself), so that is 4 database writes. If the destination audit service is remote, then the sys.transmission_queue operations have to be added (one insert and one delete). So clearly there is no way asynchronous audit can be on par with synchronous audit; there are at least 3 more writes to complete. And that is neglecting all the reads (like looking up the conversation handle etc.) and all the marshaling/unmarshaling of the message (usually some fairly expensive XML processing).
    Within one database the asynchronous pattern is appealing when the trigger processing is expensive (so that the extra cost of going async is negligible) and reducing the original call response time is important. It could also help if the audit operations create high contention and deferring the audit reduces this. A more esoteric reason is when asynchronous processing is desired for architectural reasons, like the possibility to add a workflow triggered by the original operation and the desire to change this workflow on the fly without impact/downtime (e.g. more consumers of the async message are added, the message is shredded/dispatched to more processing apps and triggers more messages downstream, etc.).
    If the audit is between different databases, even within the same instance, then the problem of availability arises (the audit table/database may be down for intervals, blocking the original operations/application).
    If the audit is remote (different SQL Server instances) then using Service Broker solves the most difficult problem (communication) in addition to asynchronicity and availability; in that case the synchronous pattern (e.g. using a linked server) is really a bad choice.

  • GeoRaster performance: Compressed vs Uncompressed

    I tried to read compressed and uncompressed GeoRasters. The difference in performance confused me. I expected better performance for the compressed raster, because Oracle needs to read several times less data from the hard drive (1:5 in my case). However, reading the uncompressed data is approximately twice as fast. I understand Oracle needs to use more CPU for uncompressing data, but I thought the time saved reading data would be more than the time spent uncompressing a raster.
    Did anybody compare the performance?
    Thanks,
    Dmitry.

    Dmitry,
    You can try it for yourself. QGIS is free open-source software.
    QGIS uses GDAL to access raster and vector data, and there is a plugin called "Oracle Spatial GeoRaster", or just oracle-raster, to deal with GeoRaster. To access geometries you don't need to activate the plugin, just select Oracle as your database "type" in the Add Vector Layer dialog box.
    Displaying GeoRaster works pretty fast, as long as you have created pyramids. Yes, there is a little delay when the GeoRaster is compressed, but that is because GDAL requests the data to be uncompressed and QGIS has no clue about it.
    Wouldn't it be nice to have a viewer that used the JPEG as it is?
    Regards,
    Ivan

  • MPOS Creation

    Hi colleagues
    We plan to use DP and GATP  in APO for FMCG industry.
    For which i have a following Questions:
    Level for Forecasting:
    Product, Product Group, Plant, and Province are required. Forecasting is done at monthly level.
    Expected combinations = approximately 9,000 to 11,000 CVCs
    For forecasting we plan to use key figure fixing at Product and Plant level. We need to maintain an aggregate at Product and Plant.
    For GATP Check and Allocation Check:
    Product, Product Group, Plant, Province, 9AKNOB and Customer are required. Allocations are done at weekly level.
    Expected combinations = 69,000 CVCs
    In our process, the statistical forecast is also one of the inputs used to decide allocation.
    For allocations we plan to do fixing at aggregate level, at Product and Province level. We need to maintain an aggregate at Product and Province.
    For designing the MPOS:
    Option 1: Separate MPOS for Forecasting and Allocation
    Pros: 1) Performance increases for forecasting because of the lower number of CVCs, since there are no customers in forecasting.
    2) Only a single aggregate to maintain for each of the forecasting MPOS and allocation MPOS
    Cons: 1) CVC duplication across the 2 MPOS
    2) Data realignment needs to be handled twice.
    Option 2: Single MPOS for forecasting and Allocation
    What is your recommendations on my requirement.
    BR
    Katerine

    Hi Katerine,
    I too think that separate MPOS may be a better choice.
    Just a couple of points...
    "In our process, Statistical forecast is also one of the input to decide allocation."
    For this point, how do you plan to have the forecast updated for the CVCs of the GATP MPOS?
    Generating it in this GATP MPOS itself, or copying from the other MPOS?
    Please also note that 69K total CVCs is not a huge number to me; we have operated with CVC volumes 10-15 times higher.
    Having many additional background jobs/process chains means additional dependencies, monitoring, scope for confusion, and more BW InfoObjects (say backup cube, extraction cube, history cubes).
    Regards
    Datta

  • AI/AO at different frequency

    Hi,
    As a newbie, I met a problem when I tried to input and output analog signal at different frequency.
    I followed PID-control-Multichannel.vi to build a control program, so input/output can be synchronized. However, the project requires that the AI frequency to be ten times of the AO. I could rewrite the while loop to make the output value constant for 9 of 10 cycles. However, I believe there is more straight forward way to do it.
    Could anybody provide an example?
    Thank you in advance.
    Sincerely yours
    Ming 
    Solved!
    Go to Solution.

    lmuri wrote:
    Hi,
    As a newbie, I met a problem when I tried to input and output analog signal at different frequency.
    I followed PID-control-Multichannel.vi to build a control program, so input/output can be synchronized. However, the project requires that the AI frequency to be ten times of the AO. I could rewrite the while loop to make the output value constant for 9 of 10 cycles. However, I believe there is more straight forward way to do it.
    Could anybody provide an example?
    Thank you in advance.
    Sincerely yours
    Ming 
    Hello Ming!
    Thank you for using the NI Forums. You'll be glad to know that DAQmx allows I/O tasks such as these to be run not only concurrently but also at different rates.
    The problem with the solution you've devised is that it removes the delegation of the tasks down to the hardware level, and your program becomes software driven; this becomes problematic when running data acquisition tasks at very high speeds, as you become limited by the speed of your Operating System (OS).
    You can coordinate your tasks to operate synchronously and perform output and acquisition at different rates by creating a task master. This generally means that you configure a task through DAQmx that maintains a clock frequency, and you create tasks which use this clock frequency, or a division of it, to operate at their own individual frequency. This not only eases the implementation of synchronous DAQmx tasks but also provides an entirely hardware-driven solution to maximise performance.
    In LabVIEW, go to Help > Find Examples to open the NI Example Finder. If you browse to Hardware Input and Output > DAQmx > Synchronization > Multi-Function > Multi-Function-Synch Dig Read Write With Counter.vi, you will find an example of how to configure a counter as a task master to control the operation of both a read and a write operation. (This example shows a digital implementation but may be easily adapted to analogue.)
    By setting the counter rate to the maximum frequency that you will require for your task (in this case, the speed at which you want to output values) and applying it to the output task's SampleClock, you will drive the output task clock with the counter as the clock source. You can then use the counter as the source for the SampleClock of the input task, but set the rate to whatever division of the driving frequency you want. In the case of your example, you can set the input rate to 0.1 times the counter frequency to acquire at a 10th of the rate.
    If you wanted to acquire at the same rate but only retrieve values at a 10th of the speed, this same solution could be configured to instead produce a trigger to return a buffered acquisition. With a master clocking task, the opportunities are endless!
    I hope that you find this helpful, and if you need any more clarification don't hesitate to let me know. Have fun with your DAQ!
    Alex Thomas, University of Manchester School of EEE LabVIEW Ambassador (CLAD)

  • Ni 5122: Use of functions that manipulate attributes in NISCOPE

    HI, all
    I would like to first thank  Alan L for responding to my last message. It was helpful.
    I am currently using an NI 5122 to sample data sets, and EACH set consists of 400 triggered records,
    each record containing 1024 points (so this 1024 x 400 matrix constitutes a single image).
    The sampling rate is 33 MHz (there is a reason for choosing this sampling frequency, so please do not
    suggest increasing the sampling frequency as a solution).
    Since the trigger occurs at 10 kHz, it takes 40 milliseconds to acquire
    a data set, which corresponds to a single frame of the image.
    I am trying to configure my program (I am using VC++) in such a way that I fetch the data
    from the on-board memory of the digitizer to the main memory of the host computer and perform DSP
    on each triggered record while sampling, rather than waiting for the entire data set (1024 x 400) to be collected.
    The frequency of the trigger signal is 10 kHz, meaning that I have 100 usec for each triggered
    record. Since I am using approximately 31 usec to sample the data, I have about 69 usec of idle
    period between each triggered record, so I have attempted to utilize those idle periods.
    I have looked at the "Acquiring data continuously" section of the "High Speed Digitizer Help" manual.
    From there, I found out that I can fetch triggered records while sampling is still going on.
    The manual suggests playing with the following attributes.
    NISCOPE_ATTR_FETCH_RECORD_NUMBER
    NISCOPE_ATTR_FETCH_NUM_RECORDS
    with the family of
    niScope_SetAttributeXXX and niScope_GetAttributeXXX functions.
    I have attempted to change the values of those attributes but
    got the following error:
    "The channel or repeated capability name is not allowed." This error also occurred
    when I attempted to just READ! (The functions I mentioned above appear immediately
    before the niScope_InitiateAcquisition call in my program.)
    I have also looked at the accompanying C example code to remedy this,
    but found a strange thing in the code. Within the example which uses
    niScope_SetAttributeViInt32, the parameter channelList is set to VI_NULL
    instead of "0", "1" or "0,1". Why?
    As I mentioned earlier, I can get a single frame of the image every 40 milliseconds
    (25 frames/sec) if everything works as I planned. Without the fetching portion of the
    code, my program currently generates about 20 frames/sec, but when I include
    the fetching code, the frame rate decreases to 8 frames/sec.
    If anybody has a better idea for reducing the fetching time than the one I am using,
    please help me.
    Godspeed
    joon

    I would like to thank you (Brooks W.)  for the reply.
    I think I have stated that my program generates 20 fps if the fetching portion of the code is omitted. As I mentioned earlier, I am developing my own application software using VC++.
    I am already using niScope_FetchBinary16, which you suggested in your reply.
    Here is a full disclosure of the issues I am experiencing when fetching triggered records from the 5122. I initially wrote a simple piece of code which runs in an int main() function and profiled the time used to fetch data using niScope_FetchBinary16. The rate was 23.885714 million samples/sec. However, when I integrated the exact same piece of code into my Win32 app., the rate went down to 8.714891 million samples/sec. My PCI link is running at 33 MHz, so the PCI bus clearly has nothing to do with this problem.
    I have been looking through the NI Discussion Forums to find an answer for this and found a person (see jim_monte's thread "Improving NI-SCOPE performance") who is experiencing a similar kind of problem. He noticed while executing his program that what appear to be unnecessary DLLs are being loaded.
    Is my problem caused by something that jim_monte suggests, or do you have any other explanation for my issue?

  • Execute CALL TRANSACTION in the background....

    Hi,
    I want to use CALL TRANSACTION in a report program and execute this report in the background.
    There is not GUI_UPLOAD / GUI_DOWNLOAD used anywhere.
    Can someone suggest me what precaution I need to take in my code for CALL TRANSACTION?
    Is there any additional code for background processing?
    Thanks.

    hi,
    this is the sample code:
    Precautions you need to take:
    1. See that you transfer the data into the correct field, on the correct screen, and in the correct format.
    2. Capture the error logs in BDCMSGCOLL - error logs have to be handled.
    3. Ensure a correct recording.
    In the selection screen you can mention which mode you want:
    A     Display all screens
    E     Display errors
    N     Background processing
    P     Background processing; debugging possible
    This is the sample code:
    *& Report  ZKO01_BDC                                                       &
    *& Object Id       :                                                       &
    *& Object Name     : ZKO01_BDC                                             &
    *& Program Name    : ZKO01_BDC                                             &
    *& Transaction Code: ZKO01_BDC                                             &
    *& Module Name     : FI / CO                                               &
    *& Program Type    : BDC Program      Create Date     : 23.06.2008         &
    *& SAP Release     : 6.0              Transport No    :                    &
    *& Description     : BDC to upload internal order with internal assignment &
    *& Version         : 1.0.                                                  &
    *& Changed on      :                                                       &
    report zko01_bdc
           no standard page heading line-size 255.
    types: begin of record,
            auart_001(004),
            ktext_002(040),
            bukrs_003(004),
            werks_004(004),      " ADDED NEW - RAHUL SHINDE
            scope_005(010),
            prctr_006(004),      " ADDED NEW - RAHUL SHINDE
            waers_007(005),
            astkz_008(001),
            plint_009(001),
          end of record.
    types: begin of ty_stats,
           mess(72) type c,
           auart_001(004),
           text(18) type c,
           end of ty_stats.
    data : it_record type table of record,
            wa_record like line of it_record.
    data: bdcdata type table of bdcdata,
          mestab type table of bdcmsgcoll.
    data : stats type table of ty_stats.
    data: opt type ctu_params.
    data: m(72) type c.
    data : fl_name type string.
    data :  wa_bdcdata like line of bdcdata,
            wa_mestab like line of mestab.
    data :  wa_stats like line of stats.
    data:   ctumode like ctu_params-dismode.
    data:   cupdate like ctu_params-updmode.
    data: file type  rlgrap-filename.
    data: xcel type table of alsmex_tabline with header line.
    data: mod1(1) type c.
    initialization.
    opt-dismode = 'A'.
    opt-updmode = 'S'.
    opt-nobinpt = 'X'.   "No batch input mode
    *                   Selection Screen
    selection-screen begin of block bk1 with frame.
    selection-screen skip 1.
    parameters p_file type localfile. " default 'D:\Common\PWC\Asset BDC\Book2.xls'.
    parameters p_mode like ctu_params-dismode obligatory.
    selection-screen skip 1.
    selection-screen end of block bk1.
    file = p_file.
    mod1 = p_mode.
    at selection-screen on value-request for p_file.
      call function 'KD_GET_FILENAME_ON_F4'
           exporting
                static    = 'X'
           changing
                file_name = p_file.
    *                   Selection Screen
    start-of-selection.
    file = p_file.
    ctumode = mod1.
    cupdate = 'L'.
      call function 'ALSM_EXCEL_TO_INTERNAL_TABLE'
           exporting
                filename                = file
                i_begin_col             = '1'
                i_begin_row             = '1'
                i_end_col               = '100'
                i_end_row               = '5000'
           tables
                intern                  = xcel
           exceptions
                inconsistent_parameters = 1
                upload_ole              = 2
                others                  = 3.
    loop at xcel.
      case xcel-col.
        when '0001'.
            wa_record-auart_001 = xcel-value.      "ok
        when '0002'.
            wa_record-ktext_002 = xcel-value.      "ok
        when '0003'.
            wa_record-bukrs_003 = xcel-value.      "ok
        when '0004'.
            wa_record-werks_004 = xcel-value.      "ok
        when '0005'.
            wa_record-scope_005 = xcel-value.      "ok
    *   when '0005'.
    *       wa_record-KTEXT_005 = xcel-value.    "ok
        when '0006'.
            wa_record-prctr_006 = xcel-value.      "ok
        when '0007'.
            wa_record-waers_007 = xcel-value.      "ok
        when '0008'.
            wa_record-astkz_008 = xcel-value.      "ok
        when '0009'.
            wa_record-plint_009 = xcel-value.      "ok
      endcase.
      at end of row.
        append wa_record to it_record.
        clear wa_record.
      endat.
    endloop.
    loop at it_record into wa_record.
    perform bdc_dynpro      using 'SAPMKAUF' '0100'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'COAS-AUART'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'COAS-AUART'
                                  wa_record-auart_001.
    perform bdc_dynpro      using 'SAPMKAUF' '0600'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=BUT2'.
    perform bdc_field       using 'COAS-KTEXT'
                                  wa_record-ktext_002.
    perform bdc_field       using 'BDC_CURSOR'
                                  'COAS-SCOPE'.
    perform bdc_field       using 'COAS-BUKRS'
                                  wa_record-bukrs_003.
    perform bdc_field       using 'COAS-WERKS'
                                  wa_record-werks_004.
    perform bdc_field       using 'COAS-SCOPE'
                                  wa_record-scope_005.
    perform bdc_field       using 'COAS-PRCTR'
                                  wa_record-prctr_006.
    perform bdc_dynpro      using 'SAPMKAUF' '0600'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=SICH'.
    *perform bdc_field       using 'COAS-KTEXT'
    *                              wa_record-KTEXT_005.
    perform bdc_field       using 'BDC_CURSOR'
                                  'COAS-PLINT'.
    perform bdc_field       using 'COAS-WAERS'
                                  wa_record-waers_007.
    perform bdc_field       using 'COAS-ASTKZ'
                                  wa_record-astkz_008.
    perform bdc_field       using 'COAS-PLINT'
                                  wa_record-plint_009.
    call transaction 'KO01' using bdcdata
                            options from opt
                            messages into mestab.
    *PERFORM loggs.
    clear wa_record.
    refresh bdcdata.
    endloop.
    end-of-selection.
    clear : wa_stats.
    if stats is initial.
        write :/ text-001.
    else.
      loop at stats into wa_stats.                         "displays runtime messages
        write:/ 'MESSAGE  :',wa_stats-auart_001.
        if wa_stats-auart_001 is not initial.
        write:/ wa_stats-auart_001,  wa_stats-text.
        endif.
        skip 1.
      endloop.
    endif.
    *&  FORMS BDC_DYNPRO
    form bdc_dynpro using program dynpro.
      clear wa_bdcdata.
      wa_bdcdata-program  = program.
      wa_bdcdata-dynpro   = dynpro.
      wa_bdcdata-dynbegin = 'X'.
      append wa_bdcdata to bdcdata..
    endform.
    *  FORM BDC_FIELD                                                 *
    form bdc_field using fnam fval.
        clear wa_bdcdata.
        wa_bdcdata-fnam = fnam.
        wa_bdcdata-fval = fval.
        append wa_bdcdata to bdcdata..
    endform.
    *&      Form  loggs
    *      text
    *  -->  p1        text
    *  <--  p2        text
    form loggs .
    loop at mestab into wa_mestab.
        if wa_mestab-msgtyp = 'E'.
          call function 'FORMAT_MESSAGE'
            exporting
              id        = wa_mestab-msgid
              lang      = 'E'
              no        = wa_mestab-msgnr
              v1        = wa_mestab-msgv1
              v2        = wa_mestab-msgv2
              v3        = wa_mestab-msgv3
              v4        = wa_mestab-msgv4
            importing
              msg       = m
            exceptions
              not_found = 1
              others    = 2.
          wa_stats-mess = m.
          wa_stats-text = text-001.            "'Not Created'.
          wa_stats-auart_001 = wa_record-auart_001.
          "wa_stats-sernr = wa_flat-sernr.
          append wa_stats to stats.
        elseif wa_mestab-msgtyp = 'S'.
          call function 'FORMAT_MESSAGE'
            exporting
              id        = wa_mestab-msgid
              lang      = 'E'
              no        = wa_mestab-msgnr
              v1        = wa_mestab-msgv1
              v2        = wa_mestab-msgv2
              v3        = wa_mestab-msgv3
              v4        = wa_mestab-msgv4
            importing
              msg       = m
            exceptions
              not_found = 1
              others    = 2.
          if wa_mestab-dyname = 'SAPMIEQ0'
                                    and wa_mestab-dynumb = '0101'
                                    and wa_mestab-msgspra = 'E'
                                    and wa_mestab-msgid = 'IS'
                                    and wa_mestab-msgnr = '144'.
            loop at stats into wa_stats where auart_001 = wa_record-auart_001.
                                          "and sernr = wa_flat-sernr.
                 delete stats.
            endloop.
                clear : wa_stats.
                wa_stats-mess = m.
                append wa_stats to stats.
          endif.
        endif.
        clear : wa_stats.
      endloop.
    endform.                    " loggs
    Edited by: Naseeruddin on Nov 26, 2008 8:57 AM

  • Are Bridge and Mini Bridge and Camera Raw part of PS?

    1.) Are Bridge and Mini Bridge and Camera Raw part of PS? OR do they also come with other programs in the Adobe software suite?
    2.) What is the difference between Bridge and Mini Bridge?
    Thanks.

    1.) The Bridge application is included in the Photoshop installer, as well as some other Adobe products (more info: http://www.adobe.com/products/creativesuite/bridge/?promoid=GWELP). The Mini Bridge extension is only included in Photoshop CS5 and InDesign CS5. The Photoshop Camera Raw plug-in is shared by Photoshop and Bridge (and is also found in Photoshop Elements).
    2.) Bridge is a separate application, whereas Mini Bridge is a panel that's hosted by the Photoshop or InDesign application. Bridge has a fuller feature set than Mini Bridge and can be used by itself. Mini Bridge needs Bridge to create thumbnails, keep files synchronized, and perform other tasks, but operates in the context of the Ps or ID host application (ie. drag/drop multiple assets into working document w/o losing document view).
    Online version of Mini Bridge Help: http://help.adobe.com/en_US/creativesuite/cs/using/WS4bebcd66a74275c33c28e88f1235296fe93-8000.html
    regards,
    steve

  • Trouble hitting breakpoint in User Exit Include

    Greetings,
    I am having a problem where I cannot get a transaction to stop at a breakpoint that I have set in a user exit include program (ZXEDFU02, starting from transaction VF04) and I cannot figure out why. I am sure that the exit code is being executed because of the results, and if I put a syntax error in this include and activate and execute, I receive a short dump at the syntax error - evidence that the program is running the include code.
    I have even tried switching on system and update debugging just to be sure, but no matter what I cannot hit this exit.  The exit for those unfamiliar allows the addition of additional data to segments of the INVOICE02 IDOC in SD.
    Thank you in advance for any help you can provide.
    Geoff

    Hi Geoff,
    The problem most probably lies with the fact that in default mode VF04 does not perform the update logic synchronously (for performance reasons).  The only thing that is performed synchronously is the selection of the data, the bundling of the data into work packets, and the submission of these work packets for execution. 
    Thus, you do not hit your break point for debugging (the actual code gets executed either in a background or update work process).  You will, however, get short dumps if the synchronous section of the program uses any function module in the function group in which you have introduced syntax errors (hence how you can get short dumps but still not stop at the break point).
    The way to solve this is right down the bottom of the VF04 transaction you have an option for Update:
    Asynchronous
    Synchronous via VB log
    Synchronous w/o VB log
    By default it is set to the first option.  If you choose either option 2 or 3, you should then run the update logic synchronously and stop at your breakpoint.
    If this still doesn't work, you have still another thing to try.  Before hitting execute, put in /H at the okcode.  When you hit execute it will throw you into debug mode.  Hit the settings button (far right) and flag the option 'In background task: do not process', and then continue.  Any background processes started will be thrown up into another session in debug mode.  Go to this other session and hit continue.  You SHOULD stop at your breakpoint if it is being processed as a background job.
    If this still doesn't work then you have some options for update debugging.  Let me know and I'll see if I can help.
    Cheers,
    Brad

  • Itunes reboot computer when trying to import music

    I had to redo the OS on my computer, which is a Dell Dimension 4600. After getting my information back on the system and installing iTunes, I try to import my music and it reboots my system after a few seconds of importing. There isn't any blue screen or anything. I thought maybe it was version 10 causing the issue, so I dropped down to an earlier version of 9, and when iTunes opens it's fine...however when I go to File and import a folder, after a few seconds the computer reboots. Please help!

    Hmmm ... let's try heading back to Dell, plugging in your Service tag in the following page (don't tell me what the service tag is):
    http://support.dell.com/support/downloads/index.aspx
    ... and updating with the rest of the updates available for you there. (Identifying your model via the service tag will just display the updates relevant to your particular configuration, so it'll cut down on any scope for confusion.)
    Were there any additional updates available for you? If so, and if you update, does that help with the rebooting behavior?

  • How to decide how big my ZIL and Cache device should be?

    Hi all,
    I have multiple LUNs of different sizes connected to my server.
    1.
    If I want to add a ZIL to the pool:
         How do I calculate the ZIL size to fit different pool sizes?
         Is it better to mirror the ZIL if the ZIL devices are from the SAN storage?
    2.
    Same for the cache device: how big should I make it?
    3.
    A cache device is recommended to be an SSD. However, my SSD is not a local SSD but an SSD in the SAN storage, which is the same storage as the pool. Is it useless to give a cache device to the pool, since it is limited by the Fibre Channel throughput?

    Hi,
    Good questions and to recap these performance features:
    Separate log devices (ZIL) are good for improving synchronous write performance
    Separate cache devices (L2ARC) are good for improving read performance
    In general, we recommend the following:
    1. Use SSDs for both ZIL or cache otherwise you won't see the performance boosts when using HDDs.
    2. The cache device size should equal your application's warm working set size.
    3. General log sizing recommendations are here:
    Creating and Destroying ZFS Storage Pools - Oracle Solaris 11.1 Administration: ZFS File Systems
    Creating a ZFS Storage Pool With Log Devices
    A more specific case is for the Oracle db redo log, where the recommendation is 15 seconds of redo
    log activity x 2 or 300 MB.
    4. Local attached SSDs as log or cache devices will perform much better than if they are attached
    through a SAN array.
    5. Mirrored log devices are recommended, but unnecessary for cache devices.
    Thanks, Cindy
