Your suggestion needed: Implementing a large-scale knowledgebase

Hi,
I am attempting to create a large-scale knowledgebase (its size in the short term won't be huge, but it needs to be able to grow vastly).
The information I will be storing is extracted from Wikipedia articles. For example, given the sentence:
"Tropical Storm Allison was a tropical storm that devastated southeast Texas"
I want to break the information down into facts such as "Tropical Storm -> Allison", "Allison -> devastated southeast Texas".
My question is: what would be the best way to implement the knowledgebase?
I have two main ideas at present:
1. A relatively simple, single-table MySQL database (however, how to fit all the facts I encounter into the same columns is an issue I have yet to resolve)
2. A type of tree for each page (subject) I extract information from. E.g. for the above example:
Tree Root -> "Tropical Storm"
Child Node of Tree Root -> "Allison"
Child of node Allison -> "devastated"
Child of node devastated -> "southeast Texas"
At present I am thinking the tree option may be the better way to implement the knowledgebase. However, if I am going to represent each Wikipedia page (subject) as an individual tree, I would need a way of storing them, i.e. in an object database or by some other method.
Firstly, does anyone know how I might save each tree, so that when it's needed I can retrieve and search it for facts?
Secondly, does anyone know of a better way I could implement such a system?
Thanks for your time and thoughts!

Your real question appears to be: how do I save a "tree"-type structure in a relational database? And the answer is, it's pretty simple.
If your tree is simple enough (by which I mean that each parent may have many children, but each child will have only one parent) then you can do it all in one table.
tblNode, in runnable MySQL form:
CREATE TABLE tblNode (
    id       INT PRIMARY KEY,
    parentid INT,            -- refers back to tblNode.id; 0 marks a root
    name     VARCHAR(255)
);
so then for your example you would have
id  parentid  name
1   0         Tropical Storm
2   1         Allison
3   2         devastated southeast Texas
4   0         Hurricane
So then you would have a tree with Tropical Storm and Hurricane at the top. Underneath Tropical Storm is Allison. Underneath Allison is devastated...
So you can just query for the "root" nodes with
SELECT id, name FROM tblNode WHERE parentid=0
And then to get the child nodes for a particular node
SELECT id,name FROM tblNode WHERE parentid=1
etc
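If you are on MySQL 8.0 or later, you can also fetch an entire subtree in one query with a recursive CTE. A minimal sketch against the tblNode table above, starting from node 1:
WITH RECURSIVE subtree AS (
    SELECT id, parentid, name
    FROM tblNode
    WHERE id = 1                      -- the "Tropical Storm" root
    UNION ALL
    SELECT c.id, c.parentid, c.name   -- walk down one level per iteration
    FROM tblNode c
    JOIN subtree s ON c.parentid = s.id
)
SELECT id, parentid, name FROM subtree;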
If you are going to have situations where the child has more than one parent then you need two tables.
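A minimal sketch of that two-table layout (the edge-table name tblEdge is just illustrative): keep the nodes in tblNode, drop its parentid column, and record each parent-child link as a row in its own table, so a child can sit under any number of parents:
CREATE TABLE tblEdge (
    parentid INT NOT NULL,            -- FK to tblNode.id
    childid  INT NOT NULL,            -- FK to tblNode.id
    PRIMARY KEY (parentid, childid)
);
-- children of node 1
SELECT n.id, n.name
FROM tblEdge e
JOIN tblNode n ON n.id = e.childid
WHERE e.parentid = 1;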

Similar Messages

  • Query performance tuning - need your suggestions

    Hi,
    Below are the SQL query and explain plan. The query takes 2 hours to execute, and sometimes it breaks (errors out) due to a memory issue.
    This is the query whose performance I need to improve. Please suggest how I can tweak it so that it executes in less time and consumes less memory.
    select a11.DATE_ID DATE_ID,
    sum(a11.C_MEASURE) WJXBFS1,
    count(a11.PKEY_GUID) WJXBFS2,
    count(Case when a11.C_MEASURE <= 10 then a11.PKEY_GUID END) WJXBFS3,
    count(Case when a11.STATUS = 'Y' and a11.C_MEASURE > 10 then a11.PKEY_GUID END) WJXBFS4,
    count(Case when a11.STATUS = 'N' then a11.PKEY_GUID END) WJXBFS5,
    sum(((a11.C_MEASURE ))) WJXBFS6,
    a17.DESC_DATE_MM_DD_YYYY DESC_DATE_MM_DD_YYYY,
    a11.DNS DNS,
    a12.VVALUE VVALUE,
    a12.VNAME VNAME,
    a13.VVALUE VVALUE0,
    a13.VNAME VNAME0,
    a14.VVALUE VVALUE1,
    a14.VNAME VNAME1,
    a15.VVALUE VVALUE2,
    a15.VNAME VNAME2,
    a16.VVALUE VVALUE3,
    a16.VNAME VNAME3,
    a11.PKEY_GUID PKEY_GUID,
    a11.UPKEY_GUID UPKEY_GUID,
    a17.DAY_OF_WEEK DAY_OF_WEEK,
    a17.D_WEEK D_WEEK,
    a17.MNTH_ID DAY_OF_MONTH,
    a17.YEAR_ID YEAR_ID,
    a17.DESC_YEAR_FULL DESC_YEAR_FULL,
    a17.WEEK_ID WEEK_ID,
    a17.WEEK_OF_YEAR WEEK_OF_YEAR
    from ACTIVITY_F a11
    join (SELECT A.ORG as ORG,
    A.DATE_ID as DATE_ID,
    A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
    A.DATE_HOUR_ID as DATE_HOUR_ID,
    A.TASK as TASK,
    A.PKEY_GUID as PKEY_GUID,
    A.VNAME as VNAME,
    A.VVALUE as VVALUE
    FROM W_ORG_D A join W_PERSON_D B on
    (A.TASK = B.TASK AND A.ORG = B.ID
    AND A.VNAME = B.VNAME)
    WHERE B.VARIABLE_OBJ = 1 ) a12
    on (a11.PKEY_GUID = a12.PKEY_GUID and
    a11.DATE_ID = a12.DATE_ID and
    a11.ORG = a12.ORG)
    join (SELECT A.ORG as ORG,
    A.DATE_ID as DATE_ID,
    A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
    A.DATE_HOUR_ID as DATE_HOUR_ID,
    A.TASK as TASK,
    A.PKEY_GUID as PKEY_GUID,
    A.VNAME as VNAME,
    A.VVALUE as VVALUE
    FROM W_ORG_D A join W_PERSON_D B on
    (A.TASK = B.TASK AND A.ORG = B.ID
    AND A.VNAME = B.VNAME)
    WHERE B.VARIABLE_OBJ = 2) a13
    on (a11.PKEY_GUID = a13.PKEY_GUID and
    a11.DATE_ID = a13.DATE_ID and
    a11.ORG = a13.ORG)
    join (SELECT A.ORG as ORG,
    A.DATE_ID as DATE_ID,
    A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
    A.DATE_HOUR_ID as DATE_HOUR_ID,
    A.TASK as TASK,
    A.PKEY_GUID as PKEY_GUID,
    A.VNAME as VNAME,
    A.VVALUE as VVALUE
    FROM W_ORG_D A join W_PERSON_D B on
    (A.TASK = B.TASK AND A.ORG = B.ID
    AND A.VNAME = B.VNAME)
    WHERE B.VARIABLE_OBJ = 3 ) a14
    on (a11.PKEY_GUID = a14.PKEY_GUID and
    a11.DATE_ID = a14.DATE_ID and
    a11.ORG = a14.ORG)
    join (SELECT A.ORG as ORG,
    A.DATE_ID as DATE_ID,
    A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
    A.DATE_HOUR_ID as DATE_HOUR_ID,
    A.TASK as TASK,
    A.PKEY_GUID as PKEY_GUID,
    A.VNAME as VNAME,
    A.VVALUE as VVALUE
    FROM W_ORG_D A join W_PERSON_D B on
    (A.TASK = B.TASK AND A.ORG = B.ID
    AND A.VNAME = B.VNAME)
    WHERE B.VARIABLE_OBJ = 4) a15
    on (a11.PKEY_GUID = a15.PKEY_GUID and
    a11.DATE_ID = a15.DATE_ID and
    a11.ORG = a15.ORG)
    join (SELECT A.ORG as ORG,
    A.DATE_ID as DATE_ID,
    A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
    A.DATE_HOUR_ID as DATE_HOUR_ID,
    A.TASK as TASK,
    A.PKEY_GUID as PKEY_GUID,
    A.VNAME as VNAME,
    A.VVALUE as VVALUE
    FROM W_ORG_D A join W_PERSON_D B on
    (A.TASK = B.TASK AND A.ORG = B.ID
    AND A.VNAME = B.VNAME)
    WHERE B.VARIABLE_OBJ = 9) a16
    on (a11.PKEY_GUID = a16.PKEY_GUID and
    a11.DATE_ID = a16.DATE_ID and
    A11.ORG = A16.ORG)
    join W_DATE_D a17
    ON (A11.DATE_ID = A17.ID)
    join W_SALES_D a18
    on (a11.TASK = a18.ID)
    where (a17.TIMSTAMP between To_Date('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS') and To_Date('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
    and a11.ORG in (12)
    and a18.SRC_TASK = 'AX012Z')
    group by a11.DATE_ID,
    a17.DESC_DATE_MM_DD_YYYY,
    a11.DNS,
    a12.VVALUE,
    a12.VNAME,
    a13.VVALUE,
    a13.VNAME,
    a14.VVALUE,
    a14.VNAME,
    a15.VVALUE,
    a15.VNAME,
    a16.VVALUE,
    a16.VNAME,
    a11.PKEY_GUID,
    a11.UPKEY_GUID,
    a17.DAY_OF_WEEK,
    a17.D_WEEK,
    a17.MNTH_ID,
    a17.YEAR_ID,
    a17.DESC_YEAR_FULL,
    a17.WEEK_ID,
    a17.WEEK_OF_YEAR;
    Explained.
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 1245 | 47 (9)| 00:00:01 |
    | 1 | HASH GROUP BY | | 1 | 1245 | 47 (9)| 00:00:01 |
    |* 2 | HASH JOIN | | 1 | 1245 | 46 (7)| 00:00:01 |
    |* 3 | HASH JOIN | | 1 | 1179 | 41 (5)| 00:00:01 |
    |* 4 | HASH JOIN | | 1 | 1113 | 37 (6)| 00:00:01 |
    |* 5 | HASH JOIN | | 1 | 1047 | 32 (4)| 00:00:01 |
    |* 6 | HASH JOIN | | 1 | 981 | 28 (4)| 00:00:01 |
    | 7 | NESTED LOOPS | | 1 | 915 | 23 (0)| 00:00:01 |
    | 8 | NESTED LOOPS | | 1 | 763 | 20 (0)| 00:00:01 |
    | 9 | NESTED LOOPS | | 1 | 611 | 17 (0)| 00:00:01 |
    | 10 | NESTED LOOPS | | 1 | 459 | 14 (0)| 00:00:01 |
    | 11 | NESTED LOOPS | | 1 | 307 | 11 (0)| 00:00:01 |
    | 12 | NESTED LOOPS | | 1 | 155 | 7 (0)| 00:00:01 |
    | 13 | NESTED LOOPS | | 1 | 72 | 3 (0)| 00:00:01 |
    | 14 | TABLE ACCESS BY INDEX ROWID| W_SALES_D | 1 | 13 | 2 (0)| 00:00:01 |
    |* 15 | INDEX UNIQUE SCAN | CONS_UNQ_W_SALES_D_SRC_ID | 1 | | 1 (0)| 00:00:01 |
    | 16 | TABLE ACCESS BY INDEX ROWID| W_DATE_D | 1 | 59 | 1 (0)| 00:00:01 |
    |* 17 | INDEX UNIQUE SCAN | UIDX_DD_TIMSTAMP | 1 | | 0 (0)| 00:00:01 |
    | 18 | TABLE ACCESS BY INDEX ROWID | ACTIVITY_F | 1 | 83 | 4 (0)| 00:00:01 |
    |* 19 | INDEX RANGE SCAN | PK_ACTIVITY_F | 1 | | 3 (0)| 00:00:01 |
    |* 20 | TABLE ACCESS BY INDEX ROWID | W_ORG_D      | 1 | 152 | 4 (0)| 00:00:01 |
    |* 21 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
    |* 22 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
    |* 23 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
    |* 24 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
    |* 25 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
    |* 26 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
    |* 27 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
    |* 28 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
    |* 29 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
    |* 30 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
    |* 31 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
    |* 32 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
    |* 33 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
    |* 34 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
    -----------------------------------------------------------------------------------------------------------------------

    Hi,
    I'm not a tuning expert, but I can suggest that you post your request according to this template:
    Thread: HOW TO: Post a SQL statement tuning request - template posting
    Then:
    a) You should post code that is easy to read. What about formatting? Your code had to be fixed in a couple of places.
    b) You could simplify your code using the WITH statement. This has nothing to do with tuning, but it will help the readability of the query.
    Check it below:
    WITH tab1 AS (SELECT a.org AS org
                       , a.date_id AS date_id
                       , a.time_of_day_id AS time_of_day_id
                       , a.date_hour_id AS date_hour_id
                       , a.task AS task
                       , a.pkey_guid AS pkey_guid
                       , a.vname AS vname
                       , a.vvalue AS vvalue
                       , b.variable_obj
                    FROM    w_org_d a
                         JOIN
                            w_person_d b
                         ON (    a.task = b.task
                             AND a.org = b.id
                             AND a.vname = b.vname))
      SELECT a11.date_id date_id
           , SUM (a11.c_measure) wjxbfs1
           , COUNT (a11.pkey_guid) wjxbfs2
           , COUNT (CASE WHEN a11.c_measure <= 10 THEN a11.pkey_guid END) wjxbfs3
           , COUNT (CASE WHEN a11.status = 'Y' AND a11.c_measure > 10 THEN a11.pkey_guid END) wjxbfs4
           , COUNT (CASE WHEN a11.status = 'N' THEN a11.pkey_guid END) wjxbfs5
           , SUM ( ( (a11.c_measure))) wjxbfs6
           , a17.desc_date_mm_dd_yyyy desc_date_mm_dd_yyyy
           , a11.dns dns
           , a12.vvalue vvalue
           , a12.vname vname
           , a13.vvalue vvalue0
           , a13.vname vname0
           , a14.vvalue vvalue1
           , a14.vname vname1
           , a15.vvalue vvalue2
           , a15.vname vname2
           , a16.vvalue vvalue3
           , a16.vname vname3
           , a11.pkey_guid pkey_guid
           , a11.upkey_guid upkey_guid
           , a17.day_of_week day_of_week
           , a17.d_week d_week
           , a17.mnth_id day_of_month
           , a17.year_id year_id
           , a17.desc_year_full desc_year_full
           , a17.week_id week_id
           , a17.week_of_year week_of_year
        FROM activity_f a11
             JOIN tab1 a12
                ON (    a11.pkey_guid = a12.pkey_guid
                    AND a11.date_id = a12.date_id
                    AND a11.org = a12.org
                    AND a12.variable_obj = 1)
             JOIN tab1 a13
                ON (    a11.pkey_guid = a13.pkey_guid
                    AND a11.date_id = a13.date_id
                    AND a11.org = a13.org
                    AND a13.variable_obj = 2)
             JOIN tab1 a14
                ON (    a11.pkey_guid = a14.pkey_guid
                    AND a11.date_id = a14.date_id
                    AND a11.org = a14.org
                    AND a14.variable_obj = 3)
             JOIN tab1 a15
                ON (    a11.pkey_guid = a15.pkey_guid
                    AND a11.date_id = a15.date_id
                    AND a11.org = a15.org
                    AND a15.variable_obj = 4)
             JOIN tab1 a16
                ON (    a11.pkey_guid = a16.pkey_guid
                    AND a11.date_id = a16.date_id
                    AND a11.org = a16.org
                    AND a16.variable_obj = 9)
             JOIN w_date_d a17
                ON (a11.date_id = a17.id)
             JOIN w_sales_d a18
                ON (a11.task = a18.id)
       WHERE (a17.timstamp BETWEEN TO_DATE ('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
                               AND TO_DATE ('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
              AND a11.org IN (12)
              AND a18.src_task = 'AX012Z')
    GROUP BY a11.date_id, a17.desc_date_mm_dd_yyyy, a11.dns, a12.vvalue
           , a12.vname, a13.vvalue, a13.vname, a14.vvalue
           , a14.vname, a15.vvalue, a15.vname, a16.vvalue
           , a16.vname, a11.pkey_guid, a11.upkey_guid, a17.day_of_week
           , a17.d_week, a17.mnth_id, a17.year_id, a17.desc_year_full
           , a17.week_id, a17.week_of_year;
    I hope I did not miss anything while reformatting the code. I could not test it, as I don't have the underlying tables.
    As I said before, I'm not a tuning expert, nor do I pretend to be, but I see this:
    1) Table W_PERSON_D is read with a full scan, five times. Any possibility of using indexes? (See the sketch below.)
    2) Tables W_SALES_D, W_DATE_D, ACTIVITY_F and W_ORG_D have TABLE ACCESS BY INDEX ROWID, which is definitely not fast.
    You should provide the additional information needed for tuning your query, as described in the post I mentioned previously.
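    As a purely illustrative sketch (the index name is made up, and the right column list depends on your data distribution, so verify it with the tuning template above): a composite index covering the filter and join columns of W_PERSON_D might let the optimizer avoid those repeated full scans.
    CREATE INDEX idx_person_var_task
        ON W_PERSON_D (VARIABLE_OBJ, TASK, ID, VNAME);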
    Regards.
    Al

  • Large scale forte implementation

    Dear Forte experts:
    I am part of a team in a large insurance company in charge of developing an
    enterprise-wide insurance solution. We have been approached by a few vendors, one
    of which bases its architecture on Forte. We really like what we saw; however,
    neither the vendor nor we are confident or knowledgeable enough about the
    performance behaviour of Forte in a distributed computing environment, which can
    be characterized as multiple islands of processing, with
    millions of transactions/day, spread across a wide area network.
    I am afraid we won't be able to choose the Forte route unless we gain confidence in
    its performance capability in our typical environment. So any insight, examples, and
    case studies that I can get from this group's collective knowledge are extremely
    helpful and greatly appreciated.
    Sincerely,
    Farhad Abar, Ph.D.

    From: Inman, Kal
    Sent: Thursday, June 12, 1997 7:07 AM
    To: [email protected]
    Subject: RE: large scale forte implementation
    Farhad
    At Andersen Windows, we have been running our Order Entry system over
    a 56K frame relay network since the system's initial deployment in Nov
    of 1994. We currently have a user base of approximately 120 PC & Mac
    clients running over the frame, with an additional internal installed
    base of approximately 50 PC & Mac workstations. This system runs on a
    single Sequent server. We soon hope to add NT clients to this mix.
    It has been our observation that Forte has not been a constraint to
    performance. When we have performance problems, it has generally been
    caused by poor design. One of our largest constraints to performance
    is the amount of data we drag across the network.
    Since the successful implementation of our Order Entry system,
    Andersen has adopted Forte as our enterprise custom development tool.
    It has allowed our development staff to concentrate on development of
    business functionality while insulating us from the complexities of
    operating systems, messaging, and maintaining platform specific code.
    We currently have several additional systems deployed using Forte.
    These systems include three Express applications, a standalone windows
    application and a mobile client application.
    I think Thomas Mercer Hursh asked a valid question "What are the
    alternatives you are considering?". I don't think you will find one
    to compare with Forte.
    Kal Inman
    Andersen Windows

  • Career Help Needed for me and my wife... your suggestion would surely help!

    Hi Guys,
    I would really appreciate it if you could help solve our mid-career dilemma. My wife and I both want to get into SAP. Kindly suggest which module we should pick.
    My Wife's Profile: She is an Economics honors graduate and an MBA. She has 7 years of banking experience in two of the biggest private-sector banks in India. Kindly suggest which module would be in line with her domain experience; she wants to get into the functional side of SAP using that domain knowledge.
    My Profile: I am a Science graduate and an MCSE, with 7 years of experience in Sales and Business Development. Should I opt for SD, CRM or HR? I am more keen on SD or CRM, but I mentioned HR as I have been doing business development in the staffing industry for the last 4 years.
    I am sure I will get the best advice here. Also, do let us know the future prospects each option may offer.
    Looking forward to your suggestions.
    Thanks!
    Rohit
    Edited by: Rohitgulati on Feb 7, 2012 9:11 PM

    Hi Leon,
    Thanks for your quick reply, but I'm wondering whether the banking domain is really a fit for FI/CO, as I guess FI/CO has more to do with finance than retail banking. In the meantime I found another module for the banking industry called IS-Banking. I think this module would be in line with the banking experience, but the demand for it in the banking sector remains to be seen!
    Thanks for your help!
    Rohit

  • Need your suggestion...

    Dear folks,
    Could you please share your suggestions?
    I have a summary report that comes from InfoCubes. Before the data reaches the InfoCube, it is sent to an ODS first.
    The source data comes from a CRM system.
    The case is like this: sometimes my users want to delete data that has been displayed in the report.
    Could you suggest what I should do regarding this requirement? What procedure should I follow?
    Many thanks.
    regards,
    Niel.

    Dear All,
    I'd like to express my thanks to you for your responses.
    The user wants to delete the data from the report permanently.
    That means, if I have data about transaction 1 for customer A, the data should be deleted from the InfoCube and also from the source (CRM).
    I've heard of a method whereby we can delete a record in the ODS by marking 0RECORDMODE with 'D'.
    My questions are :
    1. If I use that method, how can I delete the data in the InfoCube?
    2. Is there any other method besides this?
    FYI, my data flow is:
    SAP CRM -> BW: DataSource -> PSA -> ODS -> Cube.
    Regards,
    Niel.

  • ELearning for Large Scale System Change

    Our next project is training for a large scale company wide system upgrade. Our users are very dependent on the software that will be replaced and several departments will need to learn their job anew. Any suggestions on how to use eLearning to increase adoption and comprehension of the new software?

    Hi Lincoln,
    I've worked on a number of large-scale IT change projects for international clients. I can make a few suggestions, some Captivate related, some more general eLearning related.
    On projects like this I tend to produce three types of training: face-to-face, interactive tutorials/simulations & job aids. Ideally the three are planned together, allowing you to create a single instructional design plan. You want people to be introduced to the system, learning to be reinforced and then everybody to be supported.
    The face-to-face training usually contains lots of show and tell, where the users are shown how to do tasks and then have a go at doing them themselves. Ideally a number of small tasks are shown and repeated, then they are all brought together by getting the learners to follow realistic scenarios that require all of the discrete tasks to be performed together. I find that lots of training doesn't integrate deeply with people's real-life jobs, and the use of real-world scenarios helps to improve retention and performance. I have made materials where the show-and-tell pieces have been pre-recorded, which you can do with Captivate.
    The interactive tutorials are usually used as follow-on material to the face-to-face modules, allowing learners to go through simulations with guidance when they do something unexpected, though sometimes there is no face-to-face training and the interactive tutorials have to deliver all of the teaching. Sometimes these include sections that merely show users what to do and ask questions afterwards. Sometimes they become very complex branching scenarios. I usually build all of this in Captivate, though I do find it quite buggy and frustrating at times.
    Finally, I build small job aids. These are very specific and show how to do a well-defined function, such as changing an existing customer's address. Sometimes these are Captivate movies, sometimes they are PDF files, often they are implemented as both. They can be embedded and/or linked from the system help screens and FAQs, as well as used in support responses and post-training emails. The movies tend to be short and sweet: 30-120 seconds long.
    In an ideal world the number of job aids grows rapidly after implementation in response to user support requests, though in reality I often have to anticipate what users are going to find difficult and create all of these prior to launch.
    If you are going to use Captivate for your project, then I suggest that you test, test and test again before agreeing what you will deliver. It's a great bit of software, in theory, but it is quite buggy. I'm working on a project with CP6.1 and I'm having lots of audio sync problems and video corruption issues when publishing my final work as MP4 videos.
    In terms of effort, my rule of thumb is 20% planning, 60% design and scripting and 20% implementation.
    I hope this helps,
    David

  • Typical/Common large-scale ACE deployment or designs?

    I am deploying several ACE devices and GSS devices to facilitate redundancy and site load balancing at a couple of data centers.  Now that I have a bunch of experience with the ACE and GSS, are there typical or common ACE deployment methods?  Are there reference designs?  I have been looking, and haven't really found any.
    Even if they are not Cisco 'official' methods, I'm wondering how most people, particularly those who deploy a lot of these or deploy them with large-scale systems, typically do it.
    I'm using routed mode (not one-arm mode) and I'm wondering if most people use real servers (in my case, web servers) with dual NICs to support connectivity to back-end systems? Or do people commonly just route it all through the ACE?
    Also, how many VIPs and real servers have been deployed in a single ACE 4710 device?  I'm trying to deploy about 700 VIPs with about 1800 Real Servers providing content to those VIPs.
    How do people configure VIPs, farms, and sticky? I'm looking for how someone who wants to put a large number of VIPs and real servers into the ACE would succeed at doing it. I have attempted to add a large number in the 'global' policy-map, but that uses too many PANs.
    I have tried a few methods myself, and have run into the limit on Policy Action Nodes (PANs) in the ACE device. Has anyone else hit this issue? Any tips or tricks on how to use PANs more conservatively?
    Any insight you can share would be appreciated.
    - Erik

    As far as I can see from your requirements, I suggest you create 1 EAR file for your portal and 1 EAR file per module.
    The EAR file for your portal is the main application, and the EAR files of your modules are shared libraries that contain the taskflows. These taskflows can be consumed in the portal application.
    This way, you can easily deploy one module without needing to deploy the main application or the other modules.
    It also lets you divide your team of developers so everybody can work on a separate module without interfering.
    On a side note: when you have deployed your main application and later create a new module, you have to register that module with your application, so you will need to redeploy your portal; but if you update an existing module, you won't need to redeploy your portal.
    As for security, all your modules will inherit the security model of your portal application.

  • Large scale displays

    I have a project that requires creating a 8'x26' (one-way see thru) vinyl window film for display.
    The sign will include both text and large-scale stock photo images. I have never developed anything that large for print before and I'm concerned about pixelation or blurred images. Does anyone have a suggestion on what size images and what type of file format to use for the final art for production?

    Like Monika, I recommend talking to the print vendor first. That goes for any job. They will tell you what they need. I can also recommend you work primarily on an .ai file for construction of the window display. Look at it as a few "stages" of construction. Stage 1 = .ai file; Stage 2 = .eps file; Stage 3 = PDF file.
    The reason I recommend doing it this way is you can do all of your layer effects, typography, and image placement on the .ai file ( or, as I refer to it, your "live" file ). Work at 25% of final size ( in your case, 24"h x 78"w ). Make sure your document color space is CMYK ( and your image files ). Set the document raster resolution to high if it isn't already. Place images at 100% in the scaled version at 300 to 400ppi ( depending on print vendor requirements ). There should not be any visible pixelation or fuzziness at the final size ( again, depends on print vendor capabilities ).
    Save the original .ai as a copy to EPS ( this will flatten the file ). In the EPS, convert the text to outlines ( do not forget to keep the original .ai file with live text elements in case you need to edit the text in the future ). Double check all of your overprints and make sure White elements knock out. Distill ( or Print > Save as PDF ) the EPS to PDF using the High Quality Print or Press setting; leave color unchanged.
    Use this info as a guide. Final file requirements to be determined by the print vendor.

  • Line appears when applying drop shadow on large scale

    Hello!
    Some weeks ago I had to make a large scale graphic (800mmx2000mm) for a roll-up banner. I wanted to apply a drop shadow to a rounded shape, and ugly lines came up. Since it was a bit urgent, I decided not to use it.
    But now I'm curious, so I quickly made an ellipse and added a shadow, so you know what I mean. This also happens when I save it as pdf or image.
    Perhaps someday I will have to use a drop shadow at large scale. So, does anybody know how to fix this, or what I could do in case I need to use this effect under these conditions? I use Illustrator CS6 on a Mac with Mavericks.
    Thanks in advance.

    Mike Gondek wrote:
    I was able to create an ellipse to your dimension and got a good drop shadow. What happens if you manually make a drop shadow using appearance?
    FYI I tried making the same using drop shadow filter in CS5 and got this error.
    In case your file was created in CS5 and opened in CS6, I would recreate the drop shadow in CS6. I know they redid the Gaussian blur in CS6, but I'm not sure if that affected drop shadows.
    CS6 is better on raster effects at large sizes.
    My file was created and opened in CS6.
    My ellipse is around 175cmx50cm. I tried it manually like you said and got the same results:
    So, I guess I'm alone with such a problem. No idea what is wrong :/

  • Large Scale Digital Printing Guidelines

    Hi,
    I'm trying to get a better handle on the principles and options for creating the best large and very large scale prints from digital files. I'm more than well versed in the basics of Photoshop and color management, but there remain some issues I've never dealt with.
    It would be very helpful if you could give me some advice about this issue that I've divided into four levels.  In some cases I've stated principles as I understand them.  Feel free to better inform me.  In other cases I've posed direct questions and I'd really appreciate professional advice about these issues, or references to places where I can learn more.
    Thanks a lot,
    Chris
    Level one – Start with the maximum number of pixels possible.
    Principle: Your goal is to produce a print without interpolation at no less than 240 dpi.  This means that you need as many pixels as the capture device can produce at its maximum optical resolution.
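    To illustrate the arithmetic: a print 44 inches wide at 240 ppi needs 44 x 240 = 10,560 pixels across, so even a 20-megapixel capture (roughly 5,472 pixels on the long edge) covers only about half of that width without interpolation.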
    Level two – Appropriate Interpolation within Photoshop
    Use the Photoshop Image Size box with the appropriate interpolation setting (Bicubic Smoother) to increase the image size up to the maximum size of your ink jet printer.
    What is the absolute minimum resolution that is acceptable when printing up to 44”?
    What about the idea of increasing your print size in 10% increments? Does this make a real difference?
    Level three - Resizing with resampling applications like Genuine Fractals?
    In your experience do these work as advertised, and do you recommend them for preparing files to print larger than the Epson 9900 can output?
    Level four – Giant Digital Printing Methods
    What are the options for creating extremely large digital prints?
    Are there web sites or other resources you can point me to to learn more about this?
    How do you prepare files for very large-scale digital output?

    While what you say may be true, it is not always the case. I would view a 'painting' as more than a 'poster' in terms of output resolution, at least in the first stages of development. Definitely get the info from your printer and then plan to use your hardware/software setup to give you the most creative flexibility. In other words, work as big as you can (within reason, see previous statement) to give yourself the most creative freedom. Things like subtle gradations and fine details will benefit from more pixels, and can with the right printer be transferred to hard copy at higher resolutions (a photo-quality ink jet will take advantage of 600ppi) if that's what you're going for.
    Additionally, it's much easier to downscale than to wish you had a bigger image after 100 hours of labor...

  • Which approach to take? Request your suggestions

    Hello all,
    We are planning to implement a custom workflow for a process in our group. The workflow is not related to any transaction in SAP. It is a request for a form release, where the entire process happens outside SAP.
    Please find the following details about the requirement:
    1. We have 200 companies, with a total staff strength of 20k
    2. Each company has different approval levels (in one company, the GM needs to approve this process, whereas in another an HR executive's approval is enough)
    3. We may need to integrate this form release with leave workflow (we don't use leave workflow as of now) in the future, so that this workflow is triggered when a leave workflow is initiated.
    Following are the options we found:
    1. Use SAP custom workflow - we don't have a workflow consultant; would it be tough to achieve?
    2. Use Portal - Guided Procedures?
    3. Use a custom-developed portal - can it integrate with SAP in the future?
    Apart from these, is there any other way we can implement this?
    Which of these options do you suggest?
    For some of you, these may be real basic questions.
    I would appreciate your suggestions on this.
    Thanks in advance
    Shobin

    Hello Vijay,
    Thanks for your response.
    If we use SAP custom workflow, does it have the capability to support multiple levels of approvals for different companies using our R/3 instance?
    For example, Company A may have 3 levels of approvals required to complete this process, while Company B may just need one approver step. Is there a way to handle this?
    It's a really basic question, but I am not a workflow consultant. So, can you please explain this to me in more detail?
    Thanks
    Shobin
    >
    vijay kumar wrote:
    > Hi shobin jose
    >  
    >   As you said
    >
    >
    where the entire process is happening outside SAP.
    >
    >  If you are getting the data from SAP and all other processing is done in a front end (JSP, ASP, or any other front-end tool), then design only the workflow in SAP. Creating a new front end from scratch would be very difficult; I would suggest you go for the Portal or another front-end tool provided by SAP. Don't try to develop your own front-end tool for workflow - it will be very, very difficult, since the mails, the inbox, and everything else would have to be designed. SAP itself provides front-end tools; you just buy based on your requirement.
    >
    > Regards
    > vijay

  • Flex deployed on a large scale?

    We plan on developing a new product, and Flex popped into my
    mind as a development platform. I know a good deal of Flex 1.5, but
    only used it for personal sites.
    My question, for those who have deployed it in such an
    environment, is how well Flex behaves at large scale. Server load will
    be at least a thousand hits/day.
    Thanks!

    Hmm... I made one medium sized application in 1.5 (approx 10
    screens, user access <1000 times per day) and it seems to be
    working alright for the client.
    Now I am working on a major application (over 20 main
    screens, and definitely access >1000 times per day) and it is not
    going well. I am really worried about the bugs and memory issues of
    Flex 2.0. I have also not found a sure-fire way to address these
    potential issues. I can say this: for the size of application we
    are making, Flex and Flash Player just aren't up to the job.
    Compiled and executed as a single .swf application, it results in 755MB
    RAM usage and, for some reason, a constant CPU load of 60% (Pentium
    4 proc.) after accessing every screen. And this is just FlashPlayer
    doing what it is supposed to. Me, not being a computer engineer,
    can't really address these problems. Flex and AS aren't C. I can't
    control memory usage with my code. By breaking up the huge
    application into smaller ones and then loading those via an
    SWFLoader I may be able to avoid this rampant resource hogging but
    it's sort of illogical from an application architecture standpoint
    because this is ONE application.
    As a developer, I can see plenty of places to streamline the
    application but this simply isn't possible when dealing with the
    client. They want this screen to look and act this way and that
    screen to look and act the other way. I can talk about how if both
    screens use the same layout and logic they can both use the same
    template class, share static resources, blah blah until I am blue
    in the face but it won't matter because they are the client and
    they decide how the application is going to look--at the expense of
    streamlining. That's just the real world. Then I have to somehow
    make it work.
    By the way... before you think "just use view states!", I do
    use those--and bitwise logic flags for more complicated
    configurations--it's still not enough, although it did cut approx
    160 screens in documentation form down to just 20 in
    implementation.
    In worst case scenarios, I have to deny the client what they
    want and if they ask me why, I have to reply "it can't be done with
    Flex". Then their satisfaction in the product drops. Flex suddenly
    isn't as incredible as it seemed at first. It doesn't matter how
    pretty and animated the screens are if, when you run them for over an
    hour, your computer slows to a halt or .ttc fonts stop loading (a HUGE
    issue here in Japan).
    I have yet to see a sample application that comes close to
    the scale of our current project: A library book browser? neat but
    that's just square one; A Commodore 64 emulator? cute. no place in
    business; A real-estate browser? in our project that would be the
    equivalent of ONE SCREEN out of the entire application.
    I like Flex. It's fun--on a small scale. But I never want to
    develop a real world business application using it again. There are
    way more (and way more skilled) Java, JSP, PHP, etc. etc.
    developers out there than Flex developers who can make much more
    robust applications. It's a shame the client got caught up in the
    hype of Flex RIA before the technology was ready for the task.
    Very long story short: Beware using Flex for an involved
    application.
    It's going to require exponentially more time than a smaller,
    less ambitious project--especially if you don't purchase FDS. And
    oh my god implementing a Flex application on a legacy Struts
    framework... kill me now! As much as I hate "page-refresh"
    applications, Flex (both 1.5 and 2.0) has not proven to be the
    god-send that I had hoped and dreamed it would be as a developer.
    What can you expect, though? It's only been out a few years.... And
    as far as clients' perspectives go, the price for FDS also
    certainly doesn't help make it appealing. That is why it is so
    embarrassing to tell them their dream application is quickly
    becoming an egregious memory hog.
    Anyway, good luck if you take on your project with flex. Just
    be careful!

  • iMac G3 - Installing CF card instead of HDD - your suggestions?

    Hello all!
    I've purchased an iMac Graphite 600MHz and I'm thinking of retrofitting a CompactFlash card in place of the hard drive. I suppose someone has done this already; what are your suggestions and experiences?

    Last year was my big buying decision for a camera. If you think I'm overthinking things with my editing PC build now, imagine then! hehehe! I ended up buying a Panasonic HPX370 with P2 cards. A lot more expensive, but well worth it after testing all the other cameras in a specific "price" range on the market. But back then the XF series was not out yet. For me 4:2:2 is a must. Plus the fact it's a real shoulder mount, with a real manual lens, and I could put in a Lectrosonics unislot receiver... well, you get the picture.
    Honestly, I was not impressed with the Sony EXs. The picture is nice, but the ergonomics are wrong. Sure they have a larger sensor, but 35Mbps 4:2:0 long GOP stored on cards priced the same as P2? P2 cards can handle 100Mbps... 35Mbps is compact enough for SDHC cards. And that's why JVC uses them, with the same codec as Sony's. Ridiculous if you ask me. But that's typical of Sony! Another bummer with the EX that turned me off? I regularly need to review footage, and switching to and from CAM and VTR takes a whopping 25 seconds! Woohaaaa!!! Another strike for EX!
    But if I were looking at hand-held sized cameras, and if the XF series had been out when I tested all the cameras, the 50Mbps 4:2:2 codec would look mighty interesting to me. Even if it's long GOP. They are also reasonably priced cameras.
    All this to say: when I was seriously considering the JVC HM700, my card of choice was the Transcend series. After much research, they offer the best price/capacity/quality ratio. And the JVC reseller I dealt with swore by them. Of course they are SDHC cards and not CF. But I just thought my anecdote would help steer things in the right direction.

  • Large scale Connect Session, 3 Continents, 1000s of participants, tips?

    I am doing a large-scale webinar using Adobe Connect with 20 different presenters on three different continents. I will be the host, but will give up host duties to others who will act as technical team members in the three regions (Europe, Americas, Pacific). This sounds like a lot of moving parts, and it is my first show with this software.
    Does anyone have any tips on this level of participation?  Any information would be helpful...
    Thanks

    Congrats. Lots of customers are doing that today. How many participants? 1000's?
    And are you running this on your own server or Adobe Hosted?
    Are you using VoIP or an integrated audio provider?
    Suggestions:
    1. Hosts should avoid using wireless connections as they are always risky for interference. Best practice is to use wired ethernet.
    2. Keep the use of webcams to the start; after the meeting gets going, reduce the use of webcam video, as sometimes it can be distracting....all depends on the focus of your meeting
    3. Mute everyone but the hosts to keep audio interference and noise to a minimum
    4. Suggest that VoIP users have proper headsets designed for VoIP. This is a best practice.
    5. Make sure you add a nice background image to the room using preferences...personalize it
    6. Use the Q&A Pod to field questions.....download the multi-language chat pod from the Adobe Connect Exchange....translates chat into various languages on the fly....very cool with the U.N.

  • Is it possible to simulate large-scale non-linear systems using the Simulation Toolkit?

    Hi,
    I am new to LabVIEW, having used Matlab/Simulink for a few years. I am trying to simulate a relatively complex non-linear vehicle model. In Matlab/Simulink I would use an m-file to describe the system equations (i.e. x1_dot=..., x2_dot=..., etc.). Is there a similar method in LabVIEW? I have got a simple simulation running using the 'eval formula node', a very ungainly method of substituting variables, and an integrator in a 'simulation node'. It takes about an hour to run a simulation that takes 3 seconds in Matlab. I just need to know if it is realistic to do large-scale simulations in LabVIEW, or if it was not designed for that and I should persuade management to go back to Matlab!
    If it is possible, where can I find help on the subject? I have spent a long time looking on the web and come up with very little.
    Many thanks in advance,
    Paul.

    If these are simple differential equations, the easiest approach would be a For Loop with a bunch of shift registers, one for each of x1, x2, etc., containing the current values.
    In your case, you would calculate all the derivatives from the instantaneous values inside the loop, then add them (scaled by the time step) to each value before feeding them back into their respective shift registers.
    The attached very simple example shows how to generate an exponential decay function using the formula dx/dt = kx (with k negative). The shift register is initialized with the starting condition x=1.
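    In equation form, that loop is just the explicit Euler step x[n+1] = x[n] + k*x[n]*dt, so with x[0] = 1 and k negative, the shift-register sequence approximates the exponential decay e^(k*t) at t = n*dt.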
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    DiffEQ.vi 33 KB
