Locking on aggregated records

Hello BPS Experts,
If the planning level contains the product group PG1, and the InfoCube records are at product level (P1, P2), are the records containing P1 and P2 locked during manual planning with a manual planning layout?
Suggestions appreciated.
Thanks,
BWer

Hello BWer,
Read the corresponding How To Guide 'Locking of Transaction Data':
https://websmp109.sap-ag.de/bi
-> Services and Implementation
-> How To ... Guides
-> How To ... Guides BPS
Regards,
Gregor

Similar Messages

  • Lock in the results recording in QE51

    Hi, I use a BAPI to do results recording in QE51, but when someone else is inside the characteristic, results recording does not happen and no error message appears. What should I do to deal with this problem? Should I use the 'lock object' method? I know nothing about locks; would you please explain the method in detail?
    For example, the inspection lot number is 10000006313, the operation number is 0080 and the inspection characteristic number is 0010. I am inside this characteristic, but I want to do results recording using a BAPI. Results recording doesn't happen, and no error message appears. How can I deal with this problem? Many thanks!

    " Request the QM lock for the inspection operation before recording results
    CALL FUNCTION 'ENQUEUE_EQAVO'

  • Record sorting using integer properties in aggregated records

    This is a request for clarification on how sorting works within rolled-up Endeca records. We roll up Endeca records to represent the complete record (product). Every record has a price property; when the roll-up is sorted by price low to high, each aggregated record carries its lowest price, and when we apply price sorting high to low, each aggregated product carries its highest price.
    What we are looking for is for the lowest price within the aggregated record to also be used when sorting high to low. Any ideas if this can be done?
    Below is an example of how the data looks at record level.
    P1
    R1  3.35
    R2  3.25
    R3  2.85
    P2
    R1  2.35
    R2  3.45
    P3
    R1  3.45
    R2  4.65
    R3  3.25
    R4  3.95
    Currently Low to High brings (P2-2.35,P1-2.85,P3-3.25)
    Currently High to Low brings (P3-4.65,P2-3.45, P1-3.35)
    Expected high to low: (P3-3.25, P1-2.85, P2-2.35), i.e. the reverse order of the current low to high.
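    In relational terms, the expected ordering is simply 'sort the aggregates on their minimum price, descending'. A minimal SQL sketch, assuming a hypothetical record-level table rec_prices(product_id, record_id, price):
    -- One row per aggregated product, carrying its LOWEST record price,
    -- ordered high to low on that minimum
    SELECT product_id, MIN(price) AS min_price
      FROM rec_prices
     GROUP BY product_id
     ORDER BY min_price DESC;
    -- The sample data above yields: P3 (3.25), P1 (2.85), P2 (2.35)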

    Check the class java.util.Collections and its method sort().
    You may also want to have a look at java.util.Arrays.
    Søren

  • KP06 Cost Center Budget Planning System error when locking the data records

    Hi,
    While updating Cost Center Planning (KP06), the system gives the below error:
    System error when locking the data records.
    Message no. KI502
    Diagnosis
    The lock to protect the data records being processed could not be set. The
    probable reason for this is that the SAP locking table is full and no more
    entries can be added.
    Procedure
    Inform your system administrator immediately
    No planning data has been changed
    Message no. K8038
    Diagnosis
    You used Post. While preparing the data for posting, the SAP System
    determined that no changes were made in the available databank values.
    System Response
    A posting activity price is not necessary
    Please help me with how we can rectify the above error.
    Thanks
    VS Rao

    Hi,
    Check the lock entries (t-code SM12):
    http://help.sap.com/saphelp_erp2004/helpdata/en/37/a2e3ae344411d3acb00000e83539c3/frameset.htm
    Best regards, Christian

  • Aggregation level ZFI_FINGL is locked by the Change and Transport Organizer

    Dear,
    After a transport from a dev to a prod system I get this message in the planning modeler:
    "System is not set to changeable - objects are not changeable
    Aggregation level ZFI_FINGL is locked by the Change and Transport Organizer"
    Do you know how to fix this?
    Thanks in advance,
    Thibaut

    Hi,
    By default, the planning modeler configuration is added to the standard BEx transport request (if it is not saved in $TMP). If no such request is set up, create one in the Administrator Workbench transport connection.
    If you don't use this, you can instead manually add the objects you wish to change to a transport before you change them - this also works.
    Cheers
    Mark

  • Aggregation level is locked by the Change and Transport Organizer

    Dear All,
    I have created an aggregation level in the planning cube and transported it from dev to production. Since the transport, I am no longer able to change the aggregation level in dev, and the aggregation level is inactive: "Aggregation level YPNC2 is locked by the Change and Transport Organizer". Please suggest how to activate the aggregation level in dev.
    Thanks
    Christopher

    Hi,
      Kindly check the following,
    1. Is your aggregation level transported properly to the production system ?
    2. Was your aggregation level active while you were including it in the transport request ?
    3. Did you properly include the aggregation level in the request?
    If the answer to all of the above questions is "yes", then there should not be any problem with your aggregation level.
    Hope this solves your issue; if not, kindly get back to me.
    Regards,
    Balajee

  • In an SAP table, is it possible to perform a lock at the record level?

    Hi All,
    In an SAP table/Z-table, is it possible to perform a lock at the record level?
    Also, is it possible to increase the size of an SAP table or Z-table to insert more records?
    For example, I want to insert 50,000 records into a Z-table whose size category I have given as 0, which can hold some 15 thousand records.
    What about the remaining records? How can I insert them?
    Do I need to increase the size category manually, or will it be done automatically?
    Could anyone please explain this?
    Thanks in advance.
    Regards,
    Abhilash.

    Hi,
    You can insert a number of records into a table based on its size category.
    Check these expected record ranges per size category:
    0                0 to         1,200
    1            1,200 to         4,900
    2            4,900 to        19,000
    3           19,000 to        78,000
    4           78,000 to       310,000
    5          310,000 to       620,000
    6          620,000 to    25,000,000
    To lock records, check this documentation on lock modes:
    Lock mode
    Defines how to synchronize table record access by several users.
    The following modes exist:
    Exclusive lock
    The locked data can be read or processed by one user only. A request for another exclusive lock or for a shared lock is rejected.
    Shared lock
    Several users can read the same data at the same time, but as soon as a user edits the data, a second user can no longer access this data. Requests for further shared locks are accepted, even if they are issued by different users, but exclusive locks are rejected.
    Exclusive but not cumulative lock
    Exclusive locks can be requested by the same transaction more than once and handled successively, but an exclusive but not cumulative lock can only be requested once by a given transaction. All other lock requests are rejected.
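    For comparison, the same exclusive/shared distinction exists at the database level. A minimal Oracle SQL sketch (the table name zorders is made up for illustration):
    -- Exclusive row lock: only this session may change the locked row;
    -- other FOR UPDATE requests on it wait (or fail immediately with NOWAIT)
    SELECT order_id FROM zorders WHERE order_id = 4711 FOR UPDATE NOWAIT;
    -- Shared table lock: many sessions may hold it at the same time for reading,
    -- but requests for an exclusive lock are rejected while it is held
    LOCK TABLE zorders IN SHARE MODE NOWAIT;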
    Reward points if helpful.

  • I am getting a narration error that I can't record because one of the record-enabled tracks is locked in the timeline. What does this mean and how do I get out of it?


    JohnL23
    Thanks for the update.
    There are several Adobe Premiere Elements Forum threads about situations such as the one you have encountered. Typically they are the result of some heavy activity in the narration track. Some have found success in maneuvers between the Expert and Quick workspaces in versions 11 and 12, or the Timeline and Sceneline workspaces in versions earlier than 11. That is why I mentioned that type of factor in a prior post in your thread.
    If that is not working, then we are forced into a new project where the problems of the current project are not present.
    I will do a search for the threads that I recall about your type of issue. But right now all roads seem to lead to the new project. I would also ask: can you salvage this project by creating your narration clips in Audacity or a new Premiere Elements project and importing them into this project, putting the clips on one of the numbered audio tracks?
    If the problem reappears in the new project, please let us know, including the details of what was going on immediately before the problem surfaced.
    Thanks.
    ATR

  • HT4061 How do I abort an iTunes download? My phone is locked on the iTunes emblem.

    My grandson accidentally deleted something he had recorded on his phone, and he was trying to retrieve it through iTunes. It failed, and now his phone is locked on the iTunes emblem with a picture of the charger cord pointing toward it. I have turned the phone off and on, and it is still locked in place.

    The phone has gone into Recovery Mode. You need to connect it to iTunes and restore.

  • Multiple users accessing the same record issue

    I am planning to design an app where we have the following use case requirement:
    If a user who is logged into the system is accessing a record (a plan, in this case), anyone else logged into the system at the same time should be locked out of that same plan, but should still be able to access other plans in the system. A plan has many things associated with it, so the 2nd user should be locked out of everything associated with the plan being accessed by the first user.
    What is the best way to implement this at the application or the database level?
    Here are some options we have been bouncing around.
    1. When the first user logs in and accesses the first plan, we lock the plan at the app level using a singleton class which has one and only one instance on the app server. The plan_id is put as an entry into a hashtable, which is created if one does not exist. When the 2nd user tries to access the same plan, he is locked out because the plan_id is still in the hashtable. However, we somehow need to time out the first user after 30 minutes of inactivity or so, so that others can access the plan and are not locked out forever if the first user walks away from his PC or does not close his browser, keeping his session alive indefinitely.
    2. In the database, we add a 'locked' column to the plan table. When the first user opens a plan, the locked column is set to 'yes' (1), and when the user closes the browser we use some JavaScript to trigger an event which changes it back to 'no' (0), unlocking the plan. The big issue we see with this approach is that we would have to put a JavaScript onUnload method in every JSP page in the app, because the user could be anywhere in the app after starting his plan access.
    Conceptually the two options are the same, but one is implemented in the app whereas the other is at the database level.
    Is there a better way to handle this scenario, using transactions or some other technological option?
    Thanks

    Another solution, involving no modification of the database structure:
    As soon as a user wants to access a plan, try to UPDATE the plan record; if it fails, the record was locked by another user before. When the user has finished with the plan, you can COMMIT or ROLLBACK the changes, which frees the lock for other users.
    An advantage of this solution is that if the program crashes unexpectedly, there will automatically be a ROLLBACK.
    Of course, you need a transaction for this, and perhaps more than one if you want to separate the 'locking transaction' (a virtual update just for restricting access) from the 'operating transaction' (in which you do the DB work: inserts, updates, deletes, etc.).
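    A minimal PL/SQL sketch of this pattern, using SELECT ... FOR UPDATE NOWAIT, which takes the same row lock as the 'virtual update' described above (the plans table and plan id 42 are made up for illustration):
    DECLARE
      e_busy  EXCEPTION;
      PRAGMA EXCEPTION_INIT(e_busy, -54);  -- ORA-00054: resource busy
      v_id    plans.plan_id%TYPE;
    BEGIN
      -- The row lock IS the application lock: it is held until COMMIT or
      -- ROLLBACK and is released automatically if the session dies
      SELECT plan_id INTO v_id
        FROM plans
       WHERE plan_id = 42
         FOR UPDATE NOWAIT;     -- fail immediately instead of queueing
      -- ... let the user work on the plan here ...
      COMMIT;                   -- frees the lock for other users
    EXCEPTION
      WHEN e_busy THEN
        DBMS_OUTPUT.put_line('Plan 42 is being edited by another user.');
    END;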
    Hope this helped,
    Regards.

  • Is there a way to BULK COLLECT with FOR UPDATE and not lock ALL the rows?

    Currently, we fetch a cursor on a few million rows using BULK COLLECT.
    In a FORALL loop, we update the rows.
    What is happening now is that while we run this procedure, another session is running a MERGE statement on the same table, and a DEADLOCK is created between them.
    I'd like to add the FOR UPDATE clause to the cursor, but from what I've read,
    it seems that this will cause ALL the rows in the cursor to become locked.
    This is a problem, as the other session runs MERGE statements on the table every few seconds, and I don't want it to fail with ORA-00054 (resource busy).
    What I would like to know is if there is a way, that only the rows in the
    current bulk will be locked, and all the other rows will be free for updates.
    To reproduce this problem:
    1. Create test table:
    create table TEST_TAB (
      ID1 VARCHAR2(20),
      ID2 VARCHAR2(30),
      LAST_MODIFIED DATE
    );
    2. Add rows to test table:
    insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
    values ('416208000770698', '336015000385349', to_date('15-11-2009 07:14:56', 'dd-mm-yyyy hh24:mi:ss'));
    insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
    values ('208104922058401', '336015000385349', to_date('15-11-2009 07:11:15', 'dd-mm-yyyy hh24:mi:ss'));
    insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
    values ('208104000385349', '336015000385349', to_date('15-11-2009 07:15:13', 'dd-mm-yyyy hh24:mi:ss'));
    3. Create test procedure:
    CREATE OR REPLACE PROCEDURE TEST_PROC IS
      TYPE id1_typ IS TABLE OF TEST_TAB.ID1%TYPE;
      TYPE id2_typ IS TABLE OF TEST_TAB.ID2%TYPE;
      id1_arr id1_typ;
      id2_arr id2_typ;
      CURSOR My_Crs IS
        SELECT ID1, ID2
          FROM TEST_TAB
         WHERE ID2 = '336015000385349'
           FOR UPDATE;
    BEGIN
      OPEN My_Crs;
      LOOP
        FETCH My_Crs BULK COLLECT INTO id1_arr, id2_arr LIMIT 1;
        FORALL i IN 1 .. id1_arr.COUNT
          UPDATE TEST_TAB
             SET LAST_MODIFIED = SYSDATE
           WHERE ID2 = id2_arr(i)
             AND ID1 = id1_arr(i);
        dbms_lock.sleep(15);
        EXIT WHEN My_Crs%NOTFOUND;
      END LOOP;
      CLOSE My_Crs;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE_APPLICATION_ERROR(-20000, 'Test Update ' || SQLCODE || ' ' || SQLERRM);
    END TEST_PROC;
    4. Create another procedure to check if table rows are locked:
    create or replace procedure check_record_locked(p_id in TEST_TAB.ID1%type) is
      cursor c is
        select 'dummy'
          from TEST_TAB
         where ID2 = '336015000385349'
           and ID1 = p_id
           for update nowait;
      e_resource_busy exception;
      pragma exception_init(e_resource_busy, -54);
    begin
      open c;
      close c;
      dbms_output.put_line('Record ' || to_char(p_id) || ' is not locked.');
      rollback;
    exception
      when e_resource_busy then
        dbms_output.put_line('Record ' || to_char(p_id) || ' is locked.');
    end check_record_locked;
    5. In one session, run the procedure TEST_PROC.
    6. While it's running, in another session, run this block:
    begin
    check_record_locked('208104922058401');
    check_record_locked('416208000770698');
    check_record_locked('208104000385349');
    end;
    7. You will see that all records are identified as locked.
    Is there a way that only 1 row will be locked, and the other 2 will be unlocked?
    Thanks,
    Yoni.

    I don't have database access on weekends, so look at it as a template.
    Suppose you:
    create table help_iot (
      bucket number,
      id1    varchar2(20),
      constraint help_iot_pk primary key (bucket, id1)
    ) organization index;
    Not very sure about the create table syntax above.
    declare
      maximal_bucket number := 10000;  -- a few hundred rows at a time if you must update a few million rows
      the_sysdate    date   := sysdate;
      type id1_tab is table of test_tab.id1%type;
      l_locked id1_tab;
    begin
      execute immediate 'truncate table help_iot';
      -- spread the candidate rows evenly over the buckets
      insert into help_iot
      select ntile(maximal_bucket) over (order by id1) bucket, id1
        from test_tab
       where id2 = '336015000385349';
      for i in 1 .. maximal_bucket
      loop
        -- lock only the rows of the current bucket
        select id1
          bulk collect into l_locked
          from test_tab
         where id2 = '336015000385349'
           and id1 in (select id1
                         from help_iot
                        where bucket = i)
           for update of last_modified;
        update test_tab
           set last_modified = the_sysdate
         where id2 = '336015000385349'
           and id1 in (select id1
                         from help_iot
                        where bucket = i);
        commit;  -- releases the locks held on this bucket
        dbms_lock.sleep(15);
      end loop;
    end;
    Regards
    Etbin
    Introduced the_sysdate in case last_modified must be the same for all updated rows.
    Edited by: Etbin on 29.11.2009 16:48

  • Suggestion for not picking up the same record

    Hi All,
    I have a PL/SQL procedure that picks up records from the same table in multiple threads. Different threads should not pick up the same record.
    Current logic:
    I lock the row that has been picked up by the current thread, and the next thread should pick up the next row. Unfortunately I have a problem here: the inner query keeps returning the same record until the status of that particular row is changed, since I am picking the records up in FIFO order.
    I am not able to add the row lock in the inner query, since Oracle does not support it. Is there any other, simpler way to do this?
    XXRM_ARM_HEADER
    header_id
    service_id
    XXRM_ARM_LINES
    line_id
    header_id
    status
    Query
    SELECT dl.header_id, dl.line_id
      FROM xxrm_arm_header dh, xxrm_arm_lines dl
     WHERE dh.header_id = dl.header_id
       AND dh.service_id = 4
       AND dl.status = 'REQUEST_RECEIVED'
       AND dl.line_id = (SELECT line_id
                           FROM (SELECT dl.line_id
                                   FROM xxrm_arm_lines dl,
                                        xxrm_arm_header dh
                                  WHERE dh.header_id = dl.header_id
                                    AND dl.status = 'REQUEST_RECEIVED'
                                    AND dh.service_id = 4
                                  ORDER BY dl.line_id ASC)
                          WHERE ROWNUM = 1)
       FOR UPDATE OF dl.status NOWAIT SKIP LOCKED

    Robert Angel wrote:
    > forgive me if I am naive, but why wouldn't each thread updating its selected rows with a nowait, and if exception picking the next range, work?
    It is not that simple. You can get race conditions between threads as they process the same rows in the same order, each attempting to "beat" the others by being the first to lock a row. The more threads there are, the worse this situation potentially gets.
    There's also the issue of performance. The thread concept (aka parallel processing) has a single primary aim in this respect: increase performance and scalability. But does it?
    How is I/O reduced when 50% or more of the reads done by a thread (to find a row to process) are wasted I/O, because the rows being read have already been locked by other processes/threads?
    The fact remains that if you give the same set of rows to a bunch of threads (e.g. DBMS_JOB processes) for processing, they will contend for the same rows. There will be overheads. There will be wasted I/O.
    > So what I am suggesting is each thread:
    > 1. Look for unprocessed rows; I usually use an attribute field in e-Business Suite for this purpose.
    > 2. Attempt to update them, with nowait, to INPROCESS.
    > 3. If 2 fails, try the next range; repeat until 2 is possible or the end of the range.
    And step 3 is the one that will waste I/O and waste time, as the time it spends looking for a row to process could have been spent actually processing a row.
    > 4. You could also improve this by specialising each thread to have its own preferences, if there exists a mechanism that would mean fair distribution between the threads.
    Yes, and this is a key factor in removing contention between threads, reducing their I/O overheads and reducing the time they spend finding unprocessed rows to lock and process.
    Then there are Oracle technical issues. On 11gR2, for example, despite using skip locked or nowait (which cannot be used together in a single clause like in 10g), I'm seeing deadlocks when threads contend for the same rows. The same code works fine in 10g without deadlocks. So the approach one chooses needs careful testing on that specific Oracle version, to ensure it behaves as expected and meets the performance requirements.
    Bottom line: parallel processing is not as straightforward as simply slapping a nowait clause onto a select statement in order to skip locked rows.
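    For reference, a minimal sketch of a SKIP LOCKED consumer against the tables above (behaviour needs verifying on your Oracle version, per the caveats just mentioned):
    DECLARE
      CURSOR c_work IS
        SELECT dl.line_id
          FROM xxrm_arm_lines dl
         WHERE dl.status = 'REQUEST_RECEIVED'
         ORDER BY dl.line_id
           FOR UPDATE SKIP LOCKED;  -- silently skip rows other threads hold
    BEGIN
      FOR r IN c_work LOOP
        UPDATE xxrm_arm_lines
           SET status = 'INPROCESS'
         WHERE CURRENT OF c_work;
        -- ... process line r.line_id here ...
        EXIT;                       -- claim one row per call and stop
      END LOOP;
      COMMIT;
    END;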

  • Lock specific number of records using ENQUEUE & DEQUEUE

    Hi,
    Is it possible to lock a group of records in R/3?
    My requirement is to update a set of records in the VBAP table. I'm not using a BAPI here; instead, I use a direct UPDATE.
    In this case, I know I can lock individual records by passing VBELN and POSNR. But what if I have to lock 10 records?
    Is this possible in any way?
    Thanks in advance.
    The current solution is:
    1) LOOP at ITAB
    2) LOCK each entry
    3) UPDATE VBAP for that entry
    4) UNLOCK the entry
    5) Endloop
    I thought this solution might work: (Assume 10 records are present in ITAB)
    1) LOOP at ITAB (Lock all 10 entries)
    2) LOCK that entry
    3) ENDLOOP
    4) UPDATE VBAP from ITAB (updates all 10 entries in one database access)
    5) LOOP at ITAB (Unlock all 10 entries)
    6) UNLOCK that entry
    7) ENDLOOP
    Any help will be appreciated.
    Tabraiz.

    Hello,
    Both of your solutions will work.
    With solution 1 there will always be only one enqueue entry at a time, because you always enqueue, perform the update and dequeue.
    This means that in SM12 you will only ever see one lock entry for your user ID while your program runs.
    Solution 2 is also possible, but there several enqueue entries will exist at once, because you enqueue everything, then perform the updates, and then dequeue everything.
    In SM12 (lock entries) this results in more enqueue records for your user ID while your program runs.
    You have to pay attention that lock entries (SM12) are stored in a lock table of limited size, so make sure with solution 2 that you don't overflow the lock table!
    Via t-code RZ11 you can check the parameter enque/table_size (size of the lock table).
    Check the parameter value, but also its documentation, and you will understand why you should limit the number of open lock records.
    Success.
    Wim Van den Wyngaert

  • Unauthorized User can see the aggregated result?

    Dear all,
    I have a problem: an unauthorized user can see the aggregated result.
    E.g. a user is allowed to read data of Sales Office (0SALES_OFF) 0001 only (this is already set up in RSECADMIN).
    Now a cube contains sales records of sales offices 0001 and 0002:
    Sales Office 0001  Sales Value 3000
    Sales Office 0002  Sales Value 2000
    When the user executes a query on the cube without any selection criteria, it shows the aggregated Sales Value 5000.
    Only if the user tries to drill down by sales office does it give an error, because the user is not supposed to see the Sales Office 0002 records.
    What I expect is an error at the very beginning, when the user executes the query, saying that the cube contains data the user is not supposed to see, and that the user needs to enter the correct Sales Office as a selection first.
    Can anyone help with this issue? I have no idea how this can happen.
    Many Thanks!!!
    Chris

    Hi,
    Now I understand your concern correctly. Unfortunately, I do not have a prompt answer for you. So, the user should be able to view the aggregated result, but not the irrelevant sales office. If you don't have a condition on the aggregated result, restricting individual sales offices can be done in RSECADMIN.
    Let's wait for others' opinions.
    Regards,
    Suman

  • Split the incoming records into partitions

    Gurus,
    I have a table with around 17,000 records each day to be processed through Informatica. All these records are distinct in every attribute, and their status is 'PENDING_CLEAR'.
    My requirement is to read these records in 8 (if possible equal) partitions, so that I can run 8 partitions in Informatica.
    I have the below options available in informatica
    1. Pass through
    2. Key range and
    3. Database partitioning.
    Options 2 and 3 are not possible, since the database is not partitioned and I am not able to provide key ranges (the primary key on the table is an auto-increment number).
    With pass-through I would be reading the same 17,000 records across all the pipelines, and it is a challenge to handle that in Informatica.
    Any suggestions for partitioning the records using SQL?
    Thanks a lot for your time.

    >
    Yes. But the way Informatica has options to read and process the records complicates it more. I am trying to sort out the issue related to Informatica.
    >
    And that reinforces what I said above: you are likely 'misusing' Informatica if you have that sort of problem.
    A middle-tier application like Informatica does not provide a 'one size fits all' solution without making (in some cases serious) performance trade-offs.
    Such tools function best when they act as the 'middle man' among multiple databases. Need to do some basic data manipulation and source data from both Oracle and DB2? A tool can access both DBs with separate connections and help you merge or filter the data. The tool can also assist in performing some basic data transformations.
    Need to sort data and apply complex business rules? That work should be done in the database.
    The primary goal, when working with multiple database sources or targets, is to do AS MUCH work as possible in the DB; that is what the DB is designed for.
    Get the data INTO the DB as quickly and efficiently as possible and then do the work there. Mid-tier tools can be very effective at that. They can source data from multiple systems, do basic data cleansing and filtering to reject/fix/report 'dirty' data, and they can consolidate multiple data streams to feed ONE target stream.
    Tools such as Informatica use a proprietary language and syntax. DBs like Oracle and DB2 base their syntax on well-established ANSI standards. There is NO comparison.
    Versions of Informatica that I worked with didn't even allow you to access the code very easily. The code for individual business rules was locked into the objects it was attached to (with some flexibility using maps and maplets). A developer has to 'open' the object in order to get access to the actual code being used.
    The PL/SQL code used in DBs is readily accessible and easy to access.
    The first question I would ask about your issue is: can this work (processing each record) be done in the database? If the answer is yes, then that is most likely where the work should be done. The 'tool' should then be used to do as much 'preprocessing' of the data as possible and then get the cleansed data into the database (into staging tables) as quickly as possible.
    If you insist on going the way you are headed, you will need to add code to 'chunk' the data.
    See my reply in this thread for a description of how to use the NTILE function to do that:
    Re: 10g: parallel pipelined table func - distributing DISTINCT data sets
    And for more discussion of why you should NOT be going this way, here is a thread from last year with a question very similar to yours, also using Informatica:
    Looking for suggestion on splitting the output  based on rowid/any other
    >
    I have the below query. I need to split the total rows pulled by the query into two halves while maintaining data accuracy.
    E.g. if the query returns 500 rows, I need one query which returns 250 and another with the other 250.
    Reason:
    I have an Informatica ETL job which pulls data with the above query and updates the target. Run time is 2 hrs.
    In order to reduce the run time, I am using 2 pass-through partitions which run the query in two partitions, so the run time will be reduced to exactly 1 hour.
    >
    Sound familiar? Read what the responders (including me) had to say.
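    For reference, a minimal sketch of the NTILE chunking mentioned above, applied to the question at hand (assuming a numeric auto-increment primary key named id; the table name src is made up):
    -- Assign each pending row to one of 8 roughly equal buckets;
    -- each of the 8 pass-through partitions then filters on its own bucket
    SELECT id,
           NTILE(8) OVER (ORDER BY id) AS bucket
      FROM src
     WHERE status = 'PENDING_CLEAR';
    -- e.g. partition 3 reads only the rows WHERE bucket = 3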
