Primary key in BM&M layer

Hi all,
I'm using OBIEE 11g. I have a simple question: what is the point and meaning of the primary key for a dimension table in the BM&M layer? What is its purpose? I tried changing it to different columns, but it made no visible difference. For hierarchical tables I think it makes a big difference, but what about non-hierarchical ones?
Thanks in advance,
Alexander.

Hi Srini,
Thanks for quick reply.
A logical level may have more than one level key. When that is the case, specify the key that is the primary key of that level. All dimension sources which have aggregate content at a specified level need to contain the column that is the primary key of that level. Each logical level should have one level key that will be displayed when an Answers or Intelligence Dashboard user clicks to drill down. This may or may not be the primary key of the level. To set the level key to display, select the Use for drill down check box on the Level Key dialog box.
What about dimensions that don't have a hierarchical structure, i.e. that are neither level-based nor parent-child hierarchies? What if I only use them to filter the fact tables?
If I run a consistency check, I receive an error saying the dimension has to have a properly defined primary key.
Thanks in advance,
Alexander.

Similar Messages

  • Setting the "primary key" possible?

    We are trying to access an existing database using Kodo JDO. A part of
    this database is old and not particularly well-designed.
    Some tables do not have a primary key, even though they are entities that
    need to be mapped to classes in our persistence layer. For the one table
    we are looking at currently, we have changed a unique index on a numeric
    field to be the primary key, simply to make things work.
    The table (x) is extremely simple:
    id number
    description varchar
    We have defined the table as having application identity since the user
    wants to control the values of the id column. In fact, he would like to be
    able to update the value, as long as no other record in the system refers
    to it via a foreign key.
    When updating the record we get a kodo.util.UserException saying that
    changing the primary key is illegal for persisted objects.
    This is of course reasonable seen from JDO, but in our old user interface,
    it was possible to change this id as long as it was not referred to.
    Is there any other way out of this than to add an "internalID" column with
    datastore identity, make that the primary key and then give the id column
    a unique database constraint instead? We would prefer not to change
    the database schema, but are willing to if absolutely necessary.

    Is there any other way out of this than to add an "internalID" column with
    datastore identity, make that the primary key and then give the id column
    a unique database constraint instead? We would prefer not to change
    the database schema, but are willing to if absolutely necessary.
    Well, I guess you could do the following:
    - Set some unmanaged field to the desired new pk value.
    - Use a custom class mapping (probably extending
    kodo.jdbc.meta.BaseClassMapping) to change the pk column on updates based on the
    value of the unmanaged field.
    It's not trivial, but it at least avoids schema changes. Note that it will only
    work correctly if you have very short-lived PersistenceManagers, though. Once a
    transaction that changes a pk value commits, the PM cache will be invalid; Kodo
    will still think the object has its old pk value. You also couldn't use the
    DataCache.
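    If the schema change does turn out to be acceptable, the "internalID" alternative described above would look roughly like this at the database level (a sketch only; column types, sizes and constraint names are made up):
    CREATE TABLE x (
        internal_id  NUMBER        NOT NULL,   -- datastore identity, never changes
        id           NUMBER        NOT NULL,   -- user-visible id, may be updated
        description  VARCHAR2(200),
        CONSTRAINT x_pk    PRIMARY KEY (internal_id),
        CONSTRAINT x_id_uq UNIQUE (id)
    );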

  • Physical Model - Primary Key changed

    I have a report called 'Service Report' running fine. This report uses multiple tables such as Service, Specification, etc.
    Now the primary key of the Service table has changed from 'jdoid' to 'entityId' (the primary keys of all the other tables have also changed).
    What is the best way to rewrite this report?
    Do I have to import the 'Service' table (and all the other tables) again and redefine the physical model, business model and presentation model from scratch?
    Is there any other strategy that anybody could think of?
    Appreciate your help.
    Thanks
    Kavi

    No, you don't need to re-import. You can edit manually in the Admin Tool: start with the physical layer, then the BMM layer.
    Depending on the columns and how the report is built you might not need to edit the report at all.

  • Two foreign keys reference on primary key

    There are two tables:
    1) table CALENDAR with primary key: cal_id
    2) table FACTS with some columns, two of them are dates: cal_id_start_process, cal_id_stop_process
    We want to create, in the physical layer, two foreign keys for the cal_id_start_process and cal_id_stop_process columns, both referencing CALENDAR.cal_id.
    When we create the foreign key for cal_id_start_process, we choose this column, the CALENDAR table and its primary key. But when we then want to create the foreign key for cal_id_stop_process, we cannot choose the CALENDAR table again.
    Is it right that only one column from a table can reference a given primary key in the CALENDAR table?
    How can we create two foreign keys on one table's primary key?

    It's more complex than that. In your example there would be SQL like this:
    select c.cal_id, f.cal_id_start, f.cal_id_stop
      from CALENDAR c, FACTS f
     where c.cal_id = f.cal_id_start and c.cal_id = f.cal_id_stop
    (which means FACTS.cal_id_start = FACTS.cal_id_stop).
    In my case, I want a join like this:
    select c1.cal_id, c2.cal_id, f.cal_id_start, f.cal_id_stop
      from CALENDAR c1, CALENDAR c2, FACTS f
     where c1.cal_id = f.cal_id_start and c2.cal_id = f.cal_id_stop
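    For what it's worth, at the database level two foreign keys referencing the same primary key are perfectly legal, for example (constraint names are illustrative):
    alter table FACTS add constraint fk_cal_start
        foreign key (cal_id_start_process) references CALENDAR (cal_id);
    alter table FACTS add constraint fk_cal_stop
        foreign key (cal_id_stop_process) references CALENDAR (cal_id);
    In the repository, the usual way to get the c1/c2 style join shown above is to keep one join per table pair and join the second date column to a separate alias of the CALENDAR physical table.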

  • OBIEE 11g: Fact table does not have  a properly defined primary key

    Hi,
    We have two fact tables and two dimension tables.
    We have joined the tables like
    D1-->F1
    D2-->F2
    D1-->D2
    We don't have any hierarchies.
    It is throwing error in consistency check,
    [nQSError: 15001] Could not load navigation space for subject area ABC.
    [nQSError: 15033] Logical table Fact1 does not have a properly defined primary key.
    It is not like a star schema; it's more like a snowflake schema. How do we define a primary key for the fact table?
    Thanks.

    Hi,
    My suggestion would be to bring both facts into the same logical fact table (as separate logical table sources), so that you have a single fact table in the BMM layer joined with multiple dimensions.
    Build a dimension hierarchy for each dimension and then, in the content mapping of each logical table source, map the source to the appropriate dimension level (Detail or Total).
    Refer to the link below:
    http://108obiee.blogspot.com/2009/08/joining-two-fact-tables-with-different.html
    Hope this helps.
    Thanks,
    Satya

  • Insert row with datapages/bc4j  problem with primary key

    Hi everybody,
    I tried to insert a new row into a table with a primary key (id) and therefore used a data page, with an HTML form and the set-attribute data tag, to get the data into the BC layer.
    I also want to use a sequence for the primary key, so I made a trigger to set the key value before insert, but in this case it doesn't work. I also tried to bring the sequence into the BC layer (so that I can set the PK in the view object already) with a view object, but I don't know how.
    So what should I do?
    Thanks,
    Martin
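    For reference, the sequence-and-trigger pattern described above typically looks roughly like this (table, column and sequence names are assumed):
    create sequence my_tab_seq;
    create or replace trigger my_tab_bi
      before insert on my_tab
      for each row
    begin
      -- assign the primary key only when the caller did not supply one
      if :new.id is null then
        select my_tab_seq.nextval into :new.id from dual;
      end if;
    end;
    /
    If the key is generated in the database like this, the key attribute in the BC layer usually needs to be set to refresh after insert (for example by typing it as DBSequence) so the view object picks the generated value back up.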

    There's a known issue: when creating a record with a BLOB, the BLOB content won't show up until commit. The BLOB is added too late, which means you're in trouble with not-null fields...
    Sue, is this logged as a bug? And the dialog captions?
    K.

  • Create Primary key on View?

    Hi
    Can we create a primary or unique key on a simple view, when the underlying table doesn't have any primary or unique key?
    Regards,
    Nasir.

    But what about this example?
    CREATE VIEW emp_sal (emp_id, last_name,
        email UNIQUE RELY DISABLE NOVALIDATE,
        CONSTRAINT id_pk PRIMARY KEY (emp_id) RELY DISABLE NOVALIDATE)
    AS SELECT employee_id, last_name, email FROM employees;
    This creates a view and tells Oracle that the optimizer is free to rely on email and emp_id being unique, but that your code will enforce both constraints. Since you know that your data violates these constraints, that basically means you are lying to the optimizer and allowing it to make incorrect assumptions about the data. That, in turn, permits the optimizer to either generate incorrect query plans or deliver incorrect data should it happen to rely on one of these constraints. Realistically, this sort of view would take an existing data quality issue and layer a bunch of optimizer issues on top of it rather than addressing the root cause and fixing the data.
    Justin

  • Primary Key definition does not copy over to alias

    I imported a DB table into the physical layer and defined the key on it.
    When I create an alias of this table, the key definition is not copied over to the alias. Is this standard behavior? Do I need to create the primary key definition for each alias separately, or am I missing something?

    Yes, it's standard behavior. Keys (like joins) are defined per physical object and are not copied to aliases, so you need to define the key on each alias separately.
    Please mark as correct if this helps.

  • Update primary key

    Hi
    I am using WebLogic 7.0 and EJB 2.0.
    I have a requirement where I need to update my primary key. If I understand right, in EJB you are not allowed to update one of the primary key columns of a composite primary key.
    Now if I have to update this, what are the workarounds? Also, if I update that row through a stored procedure, will my EJB state in the cache be in sync with the latest data? Is ejbLoad guaranteed to be called before any method is called on the bean?
    Please give me some inputs on this
    Thanks

    Your problem lies within your database (datasource) structure. You need two keys for that particular entity. The first key is internal, meaning it is the true primary key for any specific record in the table. The second "primary key" is external, meaning it is not truly a primary key with regard to the formal constraints of your database; instead, it is the key your users will see. It is the responsibility of the application layer to ensure that the external id is unique. This way, you always have a lasting primary key (the internal key), yet your users can change their "primary key" whenever they like.
    As an example, I'm writing a J2EE application that provides our clients with a unique ID to access their records. My table definition would look something like:
    CREATE TABLE Client (
        clientId BIGINT AUTO_INCREMENT NOT NULL,
        externalClientId VARCHAR(15) UNIQUE NOT NULL,
        ... another field,
        ... another field,
        CONSTRAINT pk_foo PRIMARY KEY (clientId)
    );
    The unique constraint isn't really necessary because the application layer should check for this, but it's a good safety precaution.
    On the application side, you would only expose the 'externalClientId' to the clients. This might force you to use BMP, but it keeps the user from ever changing the true Id and is fairly clean.
    Hope this helps.
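    With this layout, letting a user "change the primary key" is just an ordinary update of the external column; the internal key never moves (values are placeholders):
    UPDATE Client
       SET externalClientId = 'ACME-0042'
     WHERE clientId = 17;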

  • How to create one primary key for each vendor

    Hi all
    I am doing an IDoc-to-JDBC scenario.
    I am triggering the IDoc from R/3 and the data goes into the DB table vendor through XI.
    The structures are as follows:
    Sender side: ZVendorIdoc (this is a customized IDoc; if I trigger it for multiple vendors, only one IDoc with multiple segments is created)
    Receiver side:
    DT_testVendor
        Table
            tblVendor
                action UPDATE_INSERT
                access                     1:unbounded
                    cVendorName         1
                    cVendorCode        1
                    fromdate                1
                    todate                    1
                 Key1
                    cVendorName         1
    If I trigger the IDoc for multiple vendors, for example vendors 2005, 2006 and 2010, I can see that the only key comes from the very first field (2005), and the whole set of records for vendors 2005, 2006 and 2010 goes into the table with 2005 as the primary key.
    Now, if I send data again for these three vendors 2005, 2006 and 2010, where the record for vendor 2005 is unchanged and those for 2006 and 2010 are different, it again takes 2005 as the primary key and does not update the data in the table.
    My requirement is that each vendor should be assigned its own unique key; for the example above there should be three keys, one for each vendor.
    Could you please help me with how to do this?

    Hi,
      In the mapping, make the statement node 0..unbounded and create one statement per vendor. This will solve your problem.
    Regards,
    Prakasu.M
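    To see why this matters: with action UPDATE_INSERT the JDBC receiver effectively issues, for each statement, an UPDATE keyed on the Key1 field and falls back to an INSERT when no row matches, roughly like this (a sketch; the values come from the mapped payload):
    UPDATE tblVendor
       SET cVendorCode = ?, fromdate = ?, todate = ?
     WHERE cVendorName = ?;
    -- if no row was updated, the adapter issues the corresponding INSERT instead
    With a single statement for the whole IDoc, only the first vendor's key is used; with one statement per vendor (0..unbounded), each vendor is matched and updated or inserted on its own key.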

  • Error While Deploying A CMP Entity Bean With A Composite Primary Key

    Hello all,
    I have a problem deploying CMP entity beans with composite primary keys. I have a CMP entity bean which contains a composite primary key composed of two local stubs. If you know more about this, please respond to my post on the EJB forum (subject: CMP Bean Local Stub as a Field of a Primary Key Class).
    In the meantime, can you please tell me what the following error message means and how to resolve it? From what I understand it might be a problem with Sun ONE AS 7, but I would like to make sure it's not me doing something wrong.
    [05/Jan/2005:12:49:03] WARNING ( 1896):      Validation error in bean CustomerSubscription: The type of non-static field customer of the key class
    test.subscription.CustomerSubscriptionCMP_1530383317_JDOState$Oid must be primitive or must implement java.io.Serializable.
         Update the type of the key class field.
         Warning: All primary key columns in primary table CustomerSubscription of the bean corresponding to the generated class test.subscription.CustomerSubscriptionCMP_1530383317_JDOState must be mapped to key fields.
         Map the following primary key columns to key fields: CustomerSubscription.CustomerEmail, CustomerSubscription.SubscriptionType. If you already have fields mapped to these columns, verify that they are key fields.
    Is it enough that the primary key class be serializable, or do all of its fields have to implement Serializable or be primitives?
    Please let me know if you need more information to answer my question.
    Thanks.
    Nikola

    Hi Nikola,
    There are several problems with your CMP bean.
    1. Fields of a Primary Key Class must be a subset of CMP fields, so yes, they must be either a primitive or a Serializable type.
    2. Sun Application Server does not support Primary Key fields of an arbitrary Serializable type (i.e. those that will be stored
    as BLOB in the database), but only primitives, Java wrappers, String, and Date/Time types.
    Are you trying to use stubs instead of relationships, or for some other reason?
    If it's the former, look at CMR fields.
    If it's the latter, I suggest storing these fields as regular CMP fields and using some other value as the PK. If you prefer that the CMP container generate the PK values, use the Unknown Primary Key feature.
    Regards,
    -marina

  • Logical standby and Primary keys

    Hi All,
    Why are primary keys essential for creating a logical standby database? I have created a logical standby database on a test basis without primary keys on most of the tables and it's working fine. I have not even put my main DB in force logging mode.

    I have not even put my main DB in force logging mode.
    This is because the redo log files (or standby redo log files) are transformed into a set of SQL statements to update the logical standby. Have you done any DML operations with the NOLOGGING option, and do you notice any errors in the alert.log? I am just curious to know.
    As for the primary keys: in the absence of both a primary key and a non-null unique constraint/index, all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. In other words, all columns except those with the following types are logged: LONG, LOB, LONG RAW, object type, and collections.
    Jaffar
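    If you want to see which tables on the primary will fall back to this "log every bounded-size column" behaviour, the data dictionary lists the tables that have neither a primary key nor a non-null unique index, for example (a quick sketch):
    select owner, table_name, bad_column
      from dba_logstdby_not_unique;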

  • Query to return list of all missing primary key ids from table T1

    I found this query online that returns start and stop values for each range of missing primary key id values in table T1. However, I want to rewrite this query to return the whole list of missing primary key ids rather than start/stop ranges. Any help, please?
    select strt, stp
      from (select m.id + 1 as strt,
                   (select min(id) - 1 from T1 x where x.id > m.id) as stp
              from T1 m left outer join T1 r on m.id = r.id - 1
             where r.id is null) x
     where stp is not null

    with t as (
               select  1 as id from dual union all
               select  2 as id from dual union all
               select  3 as id from dual union all
               select  5 as id from dual union all
               select  8 as id from dual union all
               select 10 as id from dual union all
               select 11 as id from dual union all
               select 20 as id from dual
              )
    select  id_start + level missing_id
      from  (
             select  id id_start,
                     nullif(lead(id) over(order by id) - 1, id) id_end
               from  t
            )
      start with id_end is not null
      connect by prior id_start = id_start
             and prior dbms_random.random is not null
             and level <= id_end - id_start
    MISSING_ID
             4
             6
             7
             9
            12
            13
            14
            15
            16
            17
            18
    MISSING_ID
            19
    12 rows selected.
    Or:
    with t as (
               select  1 as id from dual union all
               select  2 as id from dual union all
               select  3 as id from dual union all
               select  5 as id from dual union all
               select  8 as id from dual union all
               select 10 as id from dual union all
               select 11 as id from dual union all
               select 20 as id from dual
              )
    select  id_start + level - 1 missing_id
      from  (
             select  min(id) id_start,
                     max(id) id_end
               from  t
            )
      connect by level <= id_end - id_start
    minus
    select  id
      from  t
    MISSING_ID
             4
             6
             7
             9
            12
            13
            14
            15
            16
            17
            18
    MISSING_ID
            19
    12 rows selected.
    SY.

  • Diff b/w primary key and unique key?

    What is the difference between a primary key and a unique key?

    Hi,
    With respect to functionality, both are similar.
    In ABAP, however, we only have primary keys for the database tables declared in the Data Dictionary; "unique" is generally the term used when declaring keys for internal tables.
    Both primary and unique keys can identify one record of a table.
    Regards,
    Sesh
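    Outside ABAP, in plain SQL terms the practical difference is that a primary key also implies NOT NULL and a table can have only one of them, while it may have several unique constraints on nullable columns (illustrative DDL):
    create table demo (
      id    number       constraint demo_pk primary key,   -- unique + not null, only one per table
      email varchar2(50) constraint demo_email_uq unique   -- unique, but NULLs are still allowed
    );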

  • Access path difference between Primary Key and Unique Index

    Hi All,
    Is there any specific way the Oracle optimizer treats a primary key and a unique index differently?
    Oracle Version
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    Sample test data for a normal index:
    SQL> create table t_test_tab(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab start with 1 increment by 1 ;
    Sequence created.
    SQL>  insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
            259  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    Sample test data when using a primary key:
    SQL> create table t_test_tab1(col1 number, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab1 start with 1 increment by 1 ;
    Sequence created.
    SQL> insert into t_test_tab1 select seq_t_test_tab1.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1727568366
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB1 | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6915  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL> alter table t_test_tab1 add constraint pk_t_test_tab1 primary key (col1);
    Table altered.
    SQL> exec dbms_stats.gather_table_stats('USER_OWNER','T_TEST_TAB1',cascade => true);
    PL/SQL procedure successfully completed.
    SQL> select col1 from t_test_tab1;
    99999 rows selected.
    Execution Plan
    Plan hash value: 2995826579
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    59   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| PK_T_TEST_TAB1 | 99999 |   488K|    59   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6867  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    As you can see, even though statistics were gathered:
         * In the first table, T_TEST_TAB, the query still does a full table scan after the index is created.
         * In the second table, T_TEST_TAB1, the query uses the PRIMARY KEY index as expected.
    Any comments?
    Regards,
    BPat

    Thanks.
    Yes, I ignored the NOT NULL part: a plain B-tree index has no entries for rows where col1 is NULL, so the optimizer cannot use it to return every row unless the column is NOT NULL (which a primary key implies). I did a test and now it is working as expected:
    SQL>  create table t_test_tab(col1 number not null, col2 number, col3 varchar2(12));
    Table created.
    SQL> create sequence seq_t_test_tab start with 1 increment by 1;
    Sequence created.
    SQL> insert into t_test_tab select seq_t_test_tab.nextval, round(dbms_random.value(1,999)) , 'B'||round(dbms_random.value(1,50))||'A' from dual connect by level < 100000;
    99999 rows created.
    SQL> commit;
    Commit complete.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  set autotrace traceonly
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 1565504962
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            | 99999 |   488K|    74   (3)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| T_TEST_TAB | 99999 |   488K|    74   (3)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6912  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>  create index idx_t_test_tab on t_test_tab(col1);
    Index created.
    SQL>  exec dbms_stats.gather_table_stats('GREP_OWNER','T_TEST_TAB',cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  select col1 from t_test_tab;
    99999 rows selected.
    Execution Plan
    Plan hash value: 4115006285
    | Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                | 99999 |   488K|    63   (2)| 00:00:01 |
    |   1 |  INDEX FAST FULL SCAN| IDX_T_TEST_TAB | 99999 |   488K|    63   (2)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
           6881  consistent gets
              0  physical reads
              0  redo size
        1829388  bytes sent via SQL*Net to client
          73850  bytes received via SQL*Net from client
           6668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          99999  rows processed
    SQL>
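    Equivalently, instead of recreating the table, adding the NOT NULL constraint to the existing column should make the existing index usable for this query (a sketch, not retested here):
    alter table t_test_tab modify (col1 not null);
    select col1 from t_test_tab;
    -- with col1 guaranteed non-null, the optimizer can satisfy this query from
    -- IDX_T_TEST_TAB (typically an INDEX FAST FULL SCAN) instead of a full table scan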
