Re: Single "DBSession" Approach

On October 8th, Eric Gold wrote:
>>> Of course, along with the transaction, the service can also provide
security information to the database indicating which client the transaction
is actually for. This security information could be in the form of an extra
column in a table, indicating UPDATED_BY or something like that. And voila,
no longer a dilemma knowing which end user is actually the originator of the
transaction. <<<
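Eric's UPDATED_BY idea is straightforward to sketch. The following is a minimal illustration, using Python's sqlite3 as a stand-in for the real DBMS; the table and function names are hypothetical. The service writes under its own database login but stamps the originating end user into the row in the same transaction:

```python
import sqlite3

# Hypothetical schema: a shared service account writes on behalf of many
# end users, so each row records which end user originated the change.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        id         INTEGER PRIMARY KEY,
        balance    INTEGER NOT NULL,
        updated_by TEXT    NOT NULL   -- end-user identity, supplied by the service
    )
""")

def update_balance(conn, account_id, new_balance, end_user):
    """The service applies the change under its own DB login, but stamps
    the originating end user into UPDATED_BY inside the same transaction."""
    with conn:  # one transaction: data change and audit stamp commit together
        conn.execute(
            "UPDATE account SET balance = ?, updated_by = ? WHERE id = ?",
            (new_balance, end_user, account_id),
        )

conn.execute("INSERT INTO account (id, balance, updated_by) VALUES (1, 100, 'init')")
update_balance(conn, 1, 250, "alice")
row = conn.execute("SELECT balance, updated_by FROM account WHERE id = 1").fetchone()
print(row)  # (250, 'alice')
```

As Alan notes below, this only audits changes made through the service; it does nothing about direct back-door access to the database.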
Eric,
A valid point, but Alexander also raised an issue in his original posting that
it is possible for someone who is not using a Forte application/security scheme
to mung the database directly, without necessarily leaving any fingerprints. Now
this is probably the case in 99.99% of all RDBMS installations in existence that
don't use Forte, even with the vendor DB security schemes turned on, simply
because most organizations don't take security seriously enough.
So, this leads me to a question: in a B1-secure operating system (e.g. SE-VMS)
you have MAC (mandatory access control) schemes, with multiple hierarchical
levels of controls on files, processes, devices, and programs, differing security
clearance levels for users and programs, and varying degrees of 'secret'. Having
not thought this through completely, perhaps Alexander's requirements can be
largely accommodated using a B1-category operating system and assigning
appropriate controls on the installed DBMS and Forte partitions, in addition to
the thoughts you had on the matter.
Are there any known SE-VMS or other B1-secure installations which are running
Forte, or is that a secret which, if disclosed, will force you to kill me? <g>
Alan

Forte people,
this brings up a good point. I'm developing a Web/SDK Forte interface into
our Forte business services, and I'm still trying to figure out how I'm going
to get client-side web functionality happening. I don't see how I'm supposed
to write Java call-ins to the Forte objects. Will ALL my access to the Forte
objects have to be on the server side? And would CORBA call-in help me here?
I guess I could write a bunch of CGI C++ programs that went to Forte via
CORBA?
thanks,
-carl.

You should check out VisiBroker for Java from Visigenic
(http://www.visigenic.com/). Just to quote a small section of their material:
"Software objects created with VisiBroker for Java conform to the CORBA 2.0 and
IIOP standards, and are accessible by other CORBA 2.0/IIOP-compliant objects in
a distributed objects computing environment"
Note that Visigenic is not the only company offering this type of product.
...Wayne

Similar Messages

  • Re: forte-users-digest V1 #89

    forte-users-digest Tuesday, 8 October 1996 Volume 01 : Number 089
    From: Alexander Ananiev <[email protected]>
    Date: Tue, 08 Oct 1996 16:36:14 -0500
    Subject: "Single DBSession" approach
    The standard Forte approach to database access assumes
    that one DBSession handles requests from several users
    (clients), so the database connection is not associated with
    a particular user. This significantly impacts the
    architecture of the Forte application. Some of the problems
    caused by this approach are:
    1) An application-level security system has to be developed
    instead of using the DBMS security system. Suffice it to say
    that an application-level security system cannot provide
    protection from back-door access to the database.
    2) The application cannot utilize the DBMS locking mechanism
    for the case when a record is retrieved to the client for
    editing purposes (a "long" transaction).
    This means that a SecurityManager and a LockManager have to be
    developed to resolve these problems. This does not seem to be a
    very good solution, because these objects are intended to repeat
    the functionality of the DBMS, and these parts of the
    application may become pretty complicated. For example, my
    project's experience shows that developing a lock manager
    is not a trivial task, and most likely such a lock manager
    will be worse than the DBMS locking mechanism in terms of
    reliability and performance, just because it acts as a
    program outside the database. Besides, this approach could cause
    serious problems if the database can be updated by non-Forte
    applications (e.g., by some legacy system or batch process).
    The "one DBSession for several users" approach makes sense if each
    user connection to the database is implemented as one
    server process. (Another good argument in favor of a single
    DBSession is a heterogeneous environment where there is no
    stable connection to one database, but here I'm talking about a
    "regular" application that uses only one DBMS.) Since most of
    the time this process is idle, decreasing the number of
    processes of course leads to better server utilization and
    performance. But in essence, a database connection is just the
    current transaction ID (along with the user ID). So the
    connection could be just a number that is passed to the
    DBMS along with each request; the DBMS can then create a thread
    to handle the request, or forward it to the next available
    process if it does not support multithreading.
    DBMS vendors realize this, and some of them have already
    implemented this approach (I know that Oracle and Informix did,
    and Sybase was mentioned in the recent "one-threaded DBSession"
    discussion). A one-threaded DBSession that lives on the
    server doesn't fit well with that. The better approach would be
    to make the DBSession an attribute of the TransactionHandle
    object, so the current connection is always passed from
    the client to the service, and the service can work through
    this connection.
    So, my point is that the application should let the DBMS do its
    work and use as much of the functionality of the DBMS as
    possible, and the "single DBSession" approach doesn't help it
    do that.
    I would be glad to hear any other opinions on this topic. I
    think the "DBSession" problem is extremely important for
    any multi-tier application (for example, all Web applications
    face this problem). I'm also interested in how people
    are dealing with this problem on other projects: for example,
    whether there are projects where the alternative approach (one
    DBSession per user) was implemented, and what problems were
    encountered.
    Alexander Ananyev
    Price Waterhouse
    End of forte-users-digest V1 #89
    One of the first issues that needs to be addressed is that passing
    DBSessions from partition to partition is a huge performance hit. When
    Forte executes a SQL SELECT or FETCH statement on a DBSession that exists
    outside the current partition (DBSessions are "anchored" objects that are
    accessed via proxies), Forte fetches the result set into the partition
    containing the DBSession and then passes proxies, or creates copies, into
    the partition where the SQL code is located. These are some of the
    largest performance hits you can take in Forte.
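Alexander's suggestion that the connection should follow the transaction rather than the user can be sketched roughly as a pool from which a connection is checked out only for the span of one transaction. This is a hypothetical illustration in Python with sqlite3, not Forte's actual DBSession mechanics:

```python
import os
import queue
import sqlite3
import tempfile

# Sketch: connections live in a shared pool and are associated with a
# transaction, never with a user. Class and file names are hypothetical.
class SessionPool:
    def __init__(self, dbfile, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(sqlite3.connect(dbfile))

    def run_transaction(self, work):
        conn = self._pool.get()        # checked out for this transaction only
        try:
            with conn:                 # BEGIN ... COMMIT (ROLLBACK on error)
                return work(conn)
        finally:
            self._pool.put(conn)       # immediately available to the next caller

dbfile = os.path.join(tempfile.mkdtemp(), "pool.db")
pool = SessionPool(dbfile, size=2)
pool.run_transaction(lambda c: c.execute("CREATE TABLE t (x INTEGER)"))
pool.run_transaction(lambda c: c.execute("INSERT INTO t VALUES (1)"))
count = pool.run_transaction(
    lambda c: c.execute("SELECT COUNT(*) FROM t").fetchone()[0])
print(count)  # 1
```

Note this sketch shares the limitation discussed in the thread: a "long" transaction spanning user think-time would pin a connection and defeat the pooling.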


  • Automatic Chapter Numbers in a Single Document?

    Hi -
    Just trying to figure out if there is a way to get auto-numbered chapters in a single document. I've read you can do chapters (autonumbered?) with an InDesign "Book" with separate files, but can you do it in one document?
    All I've found so far is section markers, which are manual, and will get completely messed up if you reorder things.
    Thanks!
    Tom

    I too think a book with discrete sections and/or chapters calls for the book feature over the single-file approach. (You might be surprised how easy it is to break up your file into chapters. Simply drag and drop entire chapters into a new file.)
    I also concur with Dominic re styles. Synchronising styles is just as straightforward as editing styles within a single document: both can be dangerous if done carelessly. <g> And now you can even synchronize master pages.
    I would question Rodney's assertion that searches are easier in a single document. Other than the necessity of having the documents open, I don't see the difference.

  • Scalability of single servlet for application

              Hi,
              The J2EE blueprints recommend the single-servlet approach as a controller and
              entry point for the application. How does this scale up in WebLogic Server for
              a website with a very heavy load of many concurrent users? Right now we are planning
              to use WebLogic 6.1 for the portal development.
              Regards
              Barath
              

    WLS 5.1 uses only a single instance per servlet, but it is multi-threaded. So if your
              service method is not synchronized, and does not use any synchronization internally,
              there should be no scale-up problem.
              Just curious: why doesn't the WLS webserver use several instances under heavy load?
              minjiang
              Mike Reiche wrote:
              > There is no scaling issue here. Hitting a single servlet many times is equivalent
              > (processing-wise) to hitting many servlets fewer times.
              >
              > Mike
              >
              > "barath" <[email protected]> wrote:
              > >
              > >Hi,
              > > The J2EE blue prints recomends the single servlet approach as a controller
              > >and
              > >entry point for the application. How does this scale up in Weblogic server
              > >for
              > >a website with a very heavy load of many concurrent users. Right now
              > >we are planning
              > >to use Weblogic 6.1 for the portal development
              > >
              > >REgards
              > >Barath
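The single-instance, multi-threaded model discussed above can be illustrated outside of WLS: one shared handler object serves many threads safely as long as per-request state lives in local variables rather than shared instance fields. A rough Python sketch (class and method names are illustrative, not a servlet API):

```python
import threading

# Analogue of the single-servlet model: one handler instance shared by all
# worker threads. It is safe because per-request state ('request', 'result')
# lives on each thread's own stack, not in shared instance fields.
class Controller:
    def handle(self, request):
        result = request * 2   # local variable: each thread has its own copy
        return result

controller = Controller()      # single shared instance, like one servlet
results = {}
lock = threading.Lock()        # protects only the shared results dict

def worker(i):
    r = controller.handle(i)
    with lock:
        results[i] = r

threads = [threading.Thread(target=worker, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

ok = all(results[i] == i * 2 for i in range(100))
print(ok)  # True
```

Storing the request in an instance field (`self.request = request`) would break this under load, which is why the thread above warns against unsynchronized shared state in the service method.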
              

  • Single step request submit to conc mgr

    Does anyone have an example of an fnd_request.submit_request call with the new "single step" approach where you provide the template? I can't find any documentation on the parameter name that specifies the template to use for the output. Basically, how do you specify the additional output options when submitting a request via PL/SQL?
    Thanks.

    Thanks! A Metalink search found Note 308658.1, which seems to spell it out. Now to try it.

  • Single domain concepts

    Hi,
    I have a question. My company has a main office and several branch offices. At present each location has its own domain, with no trust relationships between them, and all operate independently. I am planning to consolidate the branch-office DCs to the head office. I know about two-way trust concepts, but another approach I am looking into is an OU-level hierarchy to simplify the process: keep the main office domain as the main domain, add the other branch offices as OUs under it, and then restrict permissions and admin rights to each location's system admins. As it is a 2012 domain, I think we can apply group policies at the OU level so they apply to that particular branch office's users; users from one branch could also use their login IDs at all locations. Is this single-domain approach an industry standard? Are there any possible challenges with this concept, compared to the traditional forest concept of multiple domains? Looking for your expert advice.

    There is no "industry standard" design: the logical structure design depends on the needs and requirements that exists in your organization. Based on description of your environment, using single domain with OU-structure for delegation of
    administration to regional staff, as well as, for scoping of group policy, is a "by the book" solution, that matches your needs quite well. Take a look at
    AD DS Design Guide - it describes guidelines for designing Active Directory infrastructure.  Web version of deign guide is also available on
    TechNet.
    Gleb.

  • Re: Transactions and Locking Rows for Update

    Dale,
    Sounds like you either need an "optimistic locking" scheme, usually
    implemented with timestamps at the database level, or a concurrency manager.
    A concurrency manager registers objects that may be of interest to multiple
    users in a central location. It takes care of notifying interested parties
    (i.e., clients,) of changes made to those objects, using a "notifier" pattern.
    The optimistic locking scheme is relatively easy to implement at the
    database level, but introduces several problems. One problem is that the
    first person to save their changes "wins": everyone else has to discard
    their changes. Also, you now have business policy effectively embedded in
    the database.
    The concurrency manager is much more flexible, and keeps the policy where
    it probably belongs. However, it is more complex, and there are some
    implications to performance when you get to the multiple-thousand-user
    range because of its event-based nature.
    Another pattern of lock management that has been implemented is a
    "key-based" lock manager that does not use events, and may be more
    effective at managing this type of concurrency for large numbers of users.
    There are too many details to go into here, but I may be able to give you
    more ideas in a separate note, if you want.
    Don
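A minimal sketch of the version-column flavor of optimistic locking Don mentions, using Python's sqlite3 as a stand-in for the database (schema and function names are assumed for illustration): each row carries a version number, and a save succeeds only if the version is unchanged since the read.

```python
import sqlite3

# Assumed schema: each row carries a version; an update succeeds only if
# the version is unchanged since the row was read, and bumps it on success.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO doc VALUES (1, 'draft', 1)")

def read(conn, doc_id):
    return conn.execute(
        "SELECT body, version FROM doc WHERE id = ?", (doc_id,)).fetchone()

def save(conn, doc_id, new_body, read_version):
    """Returns True if the save won; False if someone else saved first."""
    with conn:
        cur = conn.execute(
            "UPDATE doc SET body = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_body, doc_id, read_version),
        )
    return cur.rowcount == 1

# Two users read the same row ...
_, v1 = read(conn, 1)
_, v2 = read(conn, 1)
first = save(conn, 1, "user1 edit", v1)   # wins: version still matches
second = save(conn, 1, "user2 edit", v2)  # loses: version has moved on
print(first, second)  # True False
```

This shows exactly the trade-off Don describes: the first writer wins, the second writer is told to discard their changes, and no DBMS lock is held across user think-time.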
    At 04:48 PM 6/5/97 PDT, Dale "V." Georg wrote:
    I have a problem in the application I am currently working on, which it
    seems to me should be easily solvable via appropriate use of transactions
    and database locking, but I'm having trouble figuring out exactly how to
    do it. The database we are using is Oracle 7.2.
    The scenario is as follows: We have a window where the user picks an
    object from a dropdown list. Some of the object's attributes are then
    displayed in that window, and the user then has the option of editing
    those attributes, and at some point hitting the equivalent of a 'save' button
    to write the changes back to the database. So far, so good. Now
    introduce a second user. If user #1 and user #2 both happen to pull up
    the same object and start making changes to it, user #1 could write back
    to the database and then 15 seconds later user #2 could write back to the
    database, completely overlaying user #1's changes without ever knowing
    they had happened. This is not good, particularly for our application
    where editing the object causes it to progress from one state to the next,
    and multiple users trying to edit it at the same time spells disaster.
    The first thing that came to mind was to do a select with intent to update,
    i.e. 'select * from table where key = 'somevalue' with update'. This way
    the next user to try to select from the table using the same key would not
    be able to get it. This would prevent multiple users from being able to
    pull the same object up on their screens at the same time. Unfortunately,
    I can think of a number of problems with this approach.
    For one thing, the lock is only held for the duration of the transaction, so
    I would have to open a Forte transaction, do the select with intent to
    update, let the user modify the object, then when they saved it back again
    end the transaction. Since a window is driven by the event loop I can't
    think of any way to start a transaction, let the user interact with the
    window, then end the transaction, short of closing and re-opening the
    window. This would imply having a separate window specifically for
    updating the object, and then wrapping the whole of that window's event
    loop in a transaction. This would be a different interface than we wanted
    to present to the users, but it might still work if not for the next issue.
    The second problem is that we are using a pooled DBSession approach
    to connecting to the database. There is a single Oracle login account
    which none of the users know the password to, and thus the users
    simply share DBSession resources. If one user starts a transaction
    and does a select with intent to update on one DBSession, then another
    user starts a transaction and tries to do the same thing on the same
    DBSession, then the second user will get an error out of Oracle because
    there's already an open transaction on that DBSession.
    At this point, I am still tossing ideas around in my head, but after
    speaking with our Oracle/Forte admin here, we came to the conclusion
    that somebody must have had to address these issues before, so I
    thought I'd toss it out and see what came back.
    Thanks in advance for any ideas!
    Dale V. Georg
    Indus Consultancy Services [email protected]
    Mack Trucks, Inc. [email protected]
    ====================================
    Don Nelson
    Senior Consultant
    Forte Software, Inc.
    Denver, CO
    Corporate voice mail: 510-986-3810
    aka: [email protected]
    ====================================
    "I think nighttime is dark so you can imagine your fears with less
    distraction." - Calvin

    We have taken an optimistic data locking approach. Retrieved values are
    stored as initial values; changes are stored separately. During update, the key
    value(s) or the entire retrieved set is used in a WHERE criterion to validate
    that the data set is still in the initial state. This allows good decoupling
    of the data access layer. However, optimistic locking allows multiple users
    to access the same data set at the same time, but then only one can save
    changes, the rest would get an error message that the data had changed. We
    haven't had any need to use a pessimistic lock.
    Pessimistic locking usually involves some form of open session or DBMS level
    lock, which we haven't implemented for performance reasons. If we do find the
    need for a pessimistic lock, we will probably use cached data sets that are
    checked first, and returned as read-only if already in the cache.
    -DFR
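DFR's variant, where the originally retrieved values themselves form the WHERE criteria, might be sketched like this (Python/sqlite3 stand-in, hypothetical table): no version column is needed, and the update succeeds only if the row is still in its initial state.

```python
import sqlite3

# No version column: the UPDATE's WHERE clause repeats the values as
# originally retrieved, so the write succeeds only if the row is unchanged.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE part (id INTEGER PRIMARY KEY, qty INTEGER, loc TEXT)")
conn.execute("INSERT INTO part VALUES (1, 10, 'A1')")

def save_if_unchanged(conn, part_id, initial, changes):
    """'initial' is the row as retrieved; 'changes' holds the new values."""
    with conn:
        cur = conn.execute(
            "UPDATE part SET qty = ?, loc = ? "
            "WHERE id = ? AND qty = ? AND loc = ?",
            (changes["qty"], changes["loc"],
             part_id, initial["qty"], initial["loc"]),
        )
    return cur.rowcount == 1

initial = {"qty": 10, "loc": "A1"}
ok = save_if_unchanged(conn, 1, initial, {"qty": 8, "loc": "A1"})      # wins
stale = save_if_unchanged(conn, 1, initial, {"qty": 9, "loc": "B2"})   # row changed
print(ok, stale)  # True False
```

Comparing whole rows avoids schema changes, at the cost of a wider WHERE clause; the zero-rowcount result is what drives the "data had changed" error message DFR describes.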
    Dale V. Georg <[email protected]> on 06/05/97 03:25:02 PM
    To: Forte User Group <[email protected]> @ INTERNET
    cc: Richards* Debbie <[email protected]> @ INTERNET, Gardner*
    Steve <[email protected]> @ INTERNET
    Subject: Transactions and Locking Rows for Update

  • Transactions and Locking Rows for Update

    I have a problem in the application I am currently working on, which it
    seems to me should be easily solvable via appropriate use of transactions
    and database locking, but I'm having trouble figuring out exactly how to
    do it. The database we are using is Oracle 7.2.
    The scenario is as follows: We have a window where the user picks an
    object from a dropdown list. Some of the object's attributes are then
    displayed in that window, and the user then has the option of editing
    those attributes, and at some point hitting the equivalent of a 'save' button
    to write the changes back to the database. So far, so good. Now
    introduce a second user. If user #1 and user #2 both happen to pull up
    the same object and start making changes to it, user #1 could write back
    to the database and then 15 seconds later user #2 could write back to the
    database, completely overlaying user #1's changes without ever knowing
    they had happened. This is not good, particularly for our application
    where editing the object causes it to progress from one state to the next,
    and multiple users trying to edit it at the same time spells disaster.
    The first thing that came to mind was to do a select with intent to update,
    i.e. 'select * from table where key = 'somevalue' with update'. This way
    the next user to try to select from the table using the same key would not
    be able to get it. This would prevent multiple users from being able to
    pull the same object up on their screens at the same time. Unfortunately,
    I can think of a number of problems with this approach.
    For one thing, the lock is only held for the duration of the transaction, so
    I would have to open a Forte transaction, do the select with intent to
    update, let the user modify the object, then when they saved it back again
    end the transaction. Since a window is driven by the event loop I can't
    think of any way to start a transaction, let the user interact with the
    window, then end the transaction, short of closing and re-opening the
    window. This would imply having a separate window specifically for
    updating the object, and then wrapping the whole of that window's event
    loop in a transaction. This would be a different interface than we wanted
    to present to the users, but it might still work if not for the next issue.
    The second problem is that we are using a pooled DBSession approach
    to connecting to the database. There is a single Oracle login account
    which none of the users know the password to, and thus the users
    simply share DBSession resources. If one user starts a transaction
    and does a select with intent to update on one DBSession, then another
    user starts a transaction and tries to do the same thing on the same
    DBSession, then the second user will get an error out of Oracle because
    there's already an open transaction on that DBSession.
    At this point, I am still tossing ideas around in my head, but after
    speaking with our Oracle/Forte admin here, we came to the conclusion
    that somebody must have had to address these issues before, so I
    thought I'd toss it out and see what came back.
    Thanks in advance for any ideas!
    Dale V. Georg
    Indus Consultancy Services [email protected]
    Mack Trucks, Inc. [email protected]
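The lost-update failure Dale describes is easy to reproduce in miniature (Python/sqlite3 stand-in, hypothetical table): both users read the row, both edit, and the later write silently overlays the earlier one.

```python
import sqlite3

# The lost-update scenario: two users read the same row, both edit,
# and the second writer overwrites the first without ever knowing it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obj (id INTEGER PRIMARY KEY, state TEXT)")
conn.execute("INSERT INTO obj VALUES (1, 'new')")

def read_state(conn):
    return conn.execute("SELECT state FROM obj WHERE id = 1").fetchone()[0]

# Both users pull up the object while it is in state 'new'.
user1_view = read_state(conn)
user2_view = read_state(conn)

# User 1 saves a progression to 'reviewed'.
conn.execute("UPDATE obj SET state = ? WHERE id = 1", ("reviewed",))
# 15 seconds later user 2, still looking at 'new', saves 'approved',
# overlaying user 1's change without any warning.
conn.execute("UPDATE obj SET state = ? WHERE id = 1", ("approved",))
conn.commit()
print(read_state(conn))  # approved  -- user 1's 'reviewed' step was lost
```

Any of the schemes in the replies above (version columns, initial-value WHERE clauses, or pessimistic SELECT ... FOR UPDATE) exists to make this second write fail loudly instead of succeeding silently.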

  • Limitations on ServiceObjects

    Hi,
    We have a CallCenter application on Forte (version 3.0.G.2), and the way it
    has been designed, we have to add a new service object for every new database
    table we add. Since we expect to have more tables in the future, I would
    like to know whether there are limitations on the number of service objects
    Forte can handle. Any input would help.
    Thanks in advance

    Hi,
    First, you may look at your database for optimization (make it more
    relational). Then, there are limitations on service objects tied to your
    resources (network, CPU, memory) and to their properties (visibility and
    dialog duration have an influence on the protocol). The more service objects
    you have, the more resources you will need, in the workshop for instance (I
    ran into this in Forte R1, with 50 service objects). The effect is less
    visible now, perhaps due to Forte optimizations and more powerful hardware.
    That is the main reason to design an architecture with managers of services:
    only the managers become service objects. One way to limit the number of
    DBSession service objects is to use dynamic DBSessions through a DBSession
    manager (very useful if you have multi-threaded access to the database). With
    SQL Server you are multi-threaded, but you will need one DBSession per task;
    the same holds for Oracle on NT. But don't forget that your database is not
    extensible to infinity: each DBSession has a cost for the database.
    One way to optimize the response time between Forte and the database is to
    use a cache of SQL statements linked to your DBSessions (each DBSession
    should have its own cache). Forte manages its own cache on a DBSession
    automatically for the SQL you put in your TOOL code. This is useful if you
    have fewer SQL statements than the cache size and you always use the same
    statements. In that case you should bind fixed DBSessions to fixed data
    services (though you can share the same DBSession across several data
    services). Otherwise, you can manage it yourself: launch your prepare
    statement once and then reuse the statement for n executes
    (DBSession.PrepareStatement and DBSession.Execute). If you use Express, for
    instance, you can build this easily by reusing the Express SqlStatement
    cache. Beware that if you lose the connection to your database on a
    DBSession, you should clear your cache (the statements are not reusable) and
    reconnect to the database yourself (using DBSession.Reconnect). Don't forget
    distributed transactions, and use explicit transactions especially when you
    use cursors (if you don't, each fetch will use a new transaction).
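    The prepare-once/execute-many pattern described above can be sketched as
    follows. This is illustrative Python using sqlite3 as a stand-in for a Forte
    DBSession; the class and method names are invented, not Forte's API:

```python
import sqlite3

class CachedSession:
    """A session that reuses one cursor per distinct SQL text,
    mimicking a per-DBSession statement cache."""

    def __init__(self, dsn=":memory:"):
        self.dsn = dsn
        self.conn = sqlite3.connect(dsn)
        self._stmts = {}  # SQL text -> reusable cursor ("prepared" statement)

    def execute(self, sql, params=()):
        cur = self._stmts.get(sql)
        if cur is None:                      # prepare once...
            cur = self.conn.cursor()
            self._stmts[sql] = cur
        return cur.execute(sql, params)      # ...then reuse for n executes

    def reconnect(self):
        # After a lost connection the cached statements are not reusable:
        # clear the cache, then reconnect (cf. DBSession.Reconnect).
        self._stmts.clear()
        self.conn = sqlite3.connect(self.dsn)

s = CachedSession()
s.execute("CREATE TABLE t (x INTEGER)")
s.execute("INSERT INTO t VALUES (?)", (1,))
s.execute("INSERT INTO t VALUES (?)", (2,))
print(len(s._stmts))  # 2: the INSERT is "prepared" once, executed twice
```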
    Hope this helps,
    Daniel Nguyen
    Freelance Forte Consultant
    http://perso.club-internet.fr/dnguyen/
    Naikar, Sudhir wrote:
    Ajit,
    The reason we have a service object for each table is that, although we use
    SQL Server as the database, some of the tables are loosely tied and not
    completely relational. For example, a lookup table that holds the data for a
    dropdown list (the dropdown data is dynamic) sits in the database on its own,
    and to access that data for the dropdown list we have to create a service
    object. We have many situations similar to this one, and we keep getting
    requirements like this; that is why I was asking about limitations on
    service objects. Or is there another way we can address the problem?
    Thanks
    Sudhir Naikar
    -----Original Message-----
    From: Ajith Kallambella [SMTP:[email protected]]
    Sent: Friday, October 15, 1999 9:15 AM
    To: [email protected]; [email protected]
    Subject: Re: (forte-users) Limitations on ServiceObjects
    Naikar,
    Can you not have one (or a few) SOs, with multiple
    DBSessions, to group the tables? Can I ask
    if there is a compelling reason to have one
    ServiceObject for each table?
    Though there may not be any hard issues
    inhibiting your approach, you should consider the caveats.
    It is not a bad design, but it does not deliver a very
    clean design either.
    The service object startup sequence might become an issue
    for interdependent SOs belonging in different partitions.
    You will have to write a lot of synchronization code to make
    sure all the SOs come up in the required order.
    Because you have one SO for each table, you will eventually
    have to access one SO from another (unless there is no
    business requirement to operate on multiple tables).
    Transactions involving multiple SOs, with one
    calling another, plus the presence of routers
    with load-balanced partitions, might present you with
    a very complex scenario where you just cannot assume
    successive calls are being routed to the same partition.
    A few days ago, there was a discussion about
    this on the forum.
    Bottom line - you may end up writing a lot of
    repetitive sanity-checking code so that the
    integrity of the system is not compromised.
    The real question here is:
    is it worth all the extra effort?
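    As a side note, the startup-ordering concern is essentially a dependency-sort
    problem. A sketch (hypothetical SO names; Python 3.9+ for the stdlib
    graphlib module) of computing a safe startup order:

```python
from graphlib import TopologicalSorter

# Hypothetical service objects mapped to the set of SOs
# that must be up before each of them can start.
deps = {
    "OrderSO": {"CustomerSO", "ProductSO"},
    "CustomerSO": {"DBSessionSO"},
    "ProductSO": {"DBSessionSO"},
    "DBSessionSO": set(),
}

# static_order() yields prerequisites before their dependents.
startup_order = list(TopologicalSorter(deps).static_order())
print(startup_order[0], startup_order[-1])  # DBSessionSO OrderSO
```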
    Ajith Kallambella M.
    Forte System consultant.
    From: "Naikar, Sudhir" <[email protected]>
    To: "'[email protected]'" <[email protected]>
    Subject: (forte-users) Limitations on ServiceObjects
    Date: Fri, 15 Oct 1999 08:46:11 -0400
    Hi,
    We have a CallCenter application on Forte (version 3.0.G.2) and the way it
    has been designed we have to add new serviceobject for every new database
    table we add. Since we expect to have more tables in future I would
    like to know whether there is limitations on number of service objects
    Forte
    can handle. Any input would help.
    Thanks in advance
    For the archives, go to: http://lists.sageit.com/forte-users and use
    the login: forte and the password: archive. To unsubscribe, send in a new
    email the word: 'Unsubscribe' to: [email protected]

  • Unable to Log In After Software Update

    Hello:
    After running Software Update, I am unable to log in.
    I found the following support article:
    Mac OS X 10.5: Unable to log in after an upgrade install
    Article: TS1543
    http://support.apple.com/kb/TS1543?viewlocale=en_US
    However, upon step 8 of the instructions, the following error appears:
    delete Invalid Path
    <dscl_cmd> DS Error: -14009 (eUnknownNodeName)
    This is actually after an update (not an "upgrade"), so I'm not even sure these are the appropriate steps.
    This is the second time that I have been unable to log in after an update. The first time, I attempted to reset the password with the installation DVD; however, this resulted in "No LDAP Master" errors and I installed Mac OS X 10.5 Leopard Server on a second internal drive and configured everything again. After setting everything up on the second install, I was going to reformat the first boot drive. Unfortunately, I'm just back to where I started after running the updates. I'm not sure if having two volumes with Mac OS X Server installed makes a difference in the instructions in the article mentioned above or not.
    Another question: Is resetting the password with the installer DVD the same as using the Single User approach ("launchctl load /System/Library/LaunchDaemons/com.apple.DirectoryServices.plist", "ls /Users", then "dscl . -delete /Users/username AuthenticationAuthority")?
    If so, I'll just do that; however, I am hoping that the Single User approach might somehow avoid "No LDAP Master" errors again.
    Any suggestions are greatly appreciated.
    -Warren

    baltwo:
    Thanks! I think a week would have gone by before I realized this on my own.
    -Warren

  • Is there a way to create or use a "slide viewed" variable?

    Hi, I would like to write a conditional action based on whether or not a
    learner has visited a slide. I realize I could create an individual variable
    for every single slide, but that would be a very slow and manual way to set
    this up. From a logic perspective, I wanted to set up a single project action
    that would look something like this.
    If "slide viewed" = no, then do this...
    If "slide viewed" = yes, then don't do anything.
    Theoretically, it would seem possible since you can show a checkmark in the TOC next to a visited slide, but I can't quite figure out how to apply this to my project actions.
    Is something like this possible in Captivate 6 or 7?

    Disclaimer: I've only tried this with a couple slides, with very little error checking, but here's a single action approach I came up with.
    First I created a tracker variable. Then I labelled each slide with a single, distinct letter/number/character. (Not sure if there are any characters you should stay away from; maybe someone else can tell you which ones might cause problems as a slide name/label). So you might run out of letters/numbers/characters if you have a ton of slides. If you have enough unique, one-letter names, however, this is the logic behind the advanced action:
    If slide_tracker contains cpInfoCurrentSlideLabel
    Then: continue
    Else: do whatever you want it to do
    slide_tracker = slide_tracker + cpInfoCurrentSlideLabel
    So basically, if we've already been to this slide, the slide label will
    already have been appended to our tracker variable. Thus it will contain the
    unique letter/symbol/number, and the action will do nothing (continue). If we
    haven't been to this slide, then we can do whatever we want to do and update
    the tracker to reflect that we've now been to this slide.
    Again, a disclaimer: I spent only five minutes trying this out, so there may
    be unforeseen issues with it.
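    The membership check described above can be sketched outside Captivate
    (plain Python, with a dict standing in for the Captivate user variable; the
    function name is invented for illustration):

```python
# slide_tracker accumulates one unique character per visited slide;
# the on-enter action checks membership before doing anything.
def on_slide_enter(slide_label, state, first_visit_action):
    if slide_label in state["slide_tracker"]:
        return  # already visited: continue, do nothing
    first_visit_action()                   # do whatever you want on first visit
    state["slide_tracker"] += slide_label  # remember we've been here

state = {"slide_tracker": ""}
visits = []
on_slide_enter("A", state, lambda: visits.append("A"))
on_slide_enter("A", state, lambda: visits.append("A"))  # second visit: no-op
on_slide_enter("B", state, lambda: visits.append("B"))
print(visits, state["slide_tracker"])  # ['A', 'B'] AB
```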

  • Master page: header changes are not reflected in existing topics

    A few months ago this forum helped me to troubleshoot and fix problems with the Show More | Show Less command that I inherited from a previous writer.  I chose to use the Single Button approach and used this code, on the Master Page, for "onclick":
    <body>
    <?rh-script_start ?><script src="ehlpdhtm.js" type="text/javascript" language="JavaScript1.2"></script><?rh-script_end ?>
    <?rh-region_start type="header" style="width: 100%; position: relative;" ?>
    <table style="height: 10px;" cellspacing="0" width="100%">
      <col width="186" />
      <col width="481" />
      <col width="97" />
      <tr>
       <td><h2><img src="NGP_Logo_small.png" alt="" style="border: none;"
           border="0" /></h2></td>
       <td><h2 style="text-align: right; margin-right: 20px;">Help</h2></td>
       <td><h2><img src="btnshowall.gif" onclick="ShowAll(this)" alt=""
           style="border: none;" border="0" /></h2></td>
    This worked fine until recently, when I changed the logo in the left cell of
    the header table. Today I discovered that the img src= line had gone away. So
    I re-entered it, and it works fine on new topics, and on some, but NOT all,
    existing topics.
    Here's what happens on an older topic:
    Preview the topic.
    Click the Show All button. A script error message appears: "The value of the
    property 'ShowAll' is null or undefined, not a Function object. Code = 0.
    file = ...../Help/Administration/rlt1F1.htm."
    Here's the HTML code for this topic, which is missing the btnshowall code.
    <body>
    <?rh-script_start ?><script src="../ehlpdhtm.js" type="text/javascript"
            language="JavaScript1.2"></script><?rh-script_end ?>
    <?rh-placeholder type="header" ?>
    <table cellspacing="0" width="100%">
    <col style="width: 80.663%;" />
    <col style="width: 19.337%;" />
    <tr>
      <td><h1><?rh-variable_start name="title" format="default" showcode="showcode"
              value="Setting Global Defaults" ?>Setting Global Defaults<?rh-variable_end ?></h1></td>
      <td style="vertical-align: bottom;"><p style="text-align: right;
                 margin-bottom: 6pt; line-height: Normal;">&#160;</p></td>
    </tr>
    </table>
    I tried setting the Master Page to None and then re-setting it to Main. That does not help.
    The only workaround I've found so far is to create a new topic, copy the content from the old topic, delete the old topic, and rename the new topic using the old topic's name. But that is very tedious.
    Any thoughts on how to fix the problem without recreating every topic?
    Thank you.
    Carol

  • Slow network while running Archlinux

    Hi there,
    Recently (and, weirdest of all, only recently) the network has been really
    slow. From what I see when I'm running Windows, I assume I only get a
    fraction of the speed I should be getting when I'm running Arch Linux.
    For instance, when I'm alone at home, I have an acceptable connection under
    Arch Linux (though I still can't play online games: too much lag; I have to
    boot into Windows for that).
    And when some roommates are here, the connection is so slow that it's barely
    possible to load a page from the Arch Linux wiki (though under Windows I can
    do it with no problem, and can still play online games with a bit of lag).
    This has been going on for the past three months, so I'm pretty sure it's no
    coincidence: something is wrong.
    I've been reading parts of the wiki and tried to find what's wrong with my
    configuration. I did find that the hostnames in the rc.conf and hosts files
    were different, but changing that didn't solve the network problem.
    Here is my configuration :
    /etc/rc.conf
    # /etc/rc.conf - Main Configuration for Arch Linux
    # LOCALIZATION
    # LOCALE: available languages can be listed with the 'locale -a' command
    # DAEMON_LOCALE: If set to 'yes', use $LOCALE as the locale during daemon
    # startup and during the boot process. If set to 'no', the C locale is used.
    # HARDWARECLOCK: set to "", "UTC" or "localtime", any other value will result
    # in the hardware clock being left untouched (useful for virtualization)
    # Note: Using "localtime" is discouraged, using "" makes hwclock fall back
    # to the value in /var/lib/hwclock/adjfile
    # TIMEZONE: timezones are found in /usr/share/zoneinfo
    # Note: if unset, the value in /etc/localtime is used unchanged
    # KEYMAP: keymaps are found in /usr/share/kbd/keymaps
    # CONSOLEFONT: found in /usr/share/kbd/consolefonts (only needed for non-US)
    # CONSOLEMAP: found in /usr/share/kbd/consoletrans
    # USECOLOR: use ANSI color sequences in startup messages
    LOCALE="en_US.UTF-8"
    DAEMON_LOCALE="no"
    HARDWARECLOCK=""
    TIMEZONE="Europe/Madrid"
    KEYMAP="es"
    CONSOLEFONT="iso01-12x22"
    CONSOLEMAP=
    USECOLOR="yes"
    # HARDWARE
    # MODULES: Modules to load at boot-up. Blacklisting is no longer supported.
    # Replace every !module by an entry as on the following line in a file in
    # /etc/modprobe.d:
    # blacklist module
    # See "man modprobe.conf" for details.
    MODULES=()
    # Udev settle timeout (default to 30)
    UDEV_TIMEOUT=30
    # Scan for FakeRAID (dmraid) Volumes at startup
    USEDMRAID="no"
    # Scan for BTRFS volumes at startup
    USEBTRFS="no"
    # Scan for LVM volume groups at startup, required if you use LVM
    USELVM="no"
    # NETWORKING
    # HOSTNAME: Hostname of machine. Should also be put in /etc/hosts
    HOSTNAME="localhost"
    # Use 'ip addr' or 'ls /sys/class/net/' to see all available interfaces.
    # Wired network setup
    # - interface: name of device (required)
    # - address: IP address (leave blank for DHCP)
    # - netmask: subnet mask (ignored for DHCP) (optional, defaults to 255.255.255.0)
    # - broadcast: broadcast address (ignored for DHCP) (optional)
    # - gateway: default route (ignored for DHCP)
    # Static IP example
    # interface=eth0
    # address=192.168.0.2
    # netmask=255.255.255.0
    # broadcast=192.168.0.255
    # gateway=192.168.0.1
    # DHCP example
    # interface=eth0
    # address=
    # netmask=
    # gateway=
    interface=
    address=
    netmask=
    broadcast=
    gateway=
    # Setting this to "yes" will skip network shutdown.
    # This is required if your root device is on NFS.
    NETWORK_PERSIST="no"
    # Enable these netcfg profiles at boot-up. These are useful if you happen to
    # need more advanced network features than the simple network service
    # supports, such as multiple network configurations (ie, laptop users)
    # - set to 'menu' to present a menu during boot-up (dialog package required)
    # - prefix an entry with a ! to disable it
    # Network profiles are found in /etc/network.d
    # This requires the netcfg package
    #NETWORKS=(main)
    # DAEMONS
    # Daemons to start at boot-up (in this order)
    # - prefix a daemon with a ! to disable it
    # - prefix a daemon with a @ to start it up in the background
    # If something other takes care of your hardware clock (ntpd, dual-boot...)
    # you should disable 'hwclock' here.
    DAEMONS=(hwclock @syslog-ng dbus !network networkmanager dhcpd ifplugd netfs @crond @sshd)
    /etc/resolv.conf
    # Generated by NetworkManager
    nameserver 62.42.230.24
    nameserver 62.42.63.52
    /etc/hosts
    # /etc/hosts: static lookup table for host names
    #<ip-address> <hostname.domain.org> <hostname>
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost.localdomain localhost
    # End of file
    I tried commenting out either of the DNS servers, without any change.
    I also tried two different network daemons, NetworkManager and Wicd.
    Is there something wrong with my configuration? Is there anything I can do
    to find out where this problem may come from?

    Re-read this section on rc.conf, cross-referring with the related items such
    as netcfg, and take note of the single network approach and what you should
    do in #2 regarding the network daemon vs. your config with NetworkManager:
    https://wiki.archlinux.org/index.php/Rc.conf#Networking
    Take a look at pdnsd as an alternative way to configure DNS lookups, also check out OpenDNS and Google's DNS as potential alternative nameservers so that you don't rely on your ISP's services.
    https://wiki.archlinux.org/index.php/Pdnsd
    http://www.opendns.com/
    http://code.google.com/speed/public-dns/
    Let us know if that makes it better.
    NB: The DNS bit is an alternative approach that does not necessarily solve
    your issue, though it might be a desirable configuration.
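    On the daemon point: the posted DAEMONS line starts networkmanager together
    with dhcpd and ifplugd, and the wiki section above recommends running exactly
    one thing that manages the interface. A sketch of a cleaned-up line, assuming
    NetworkManager is the one you want to keep (adjust to your own setup):

```shell
# /etc/rc.conf (sketch): let NetworkManager manage the interface alone.
# dhcpd is a DHCP *server* and ifplugd is another interface manager;
# both are disabled here with the ! prefix.
DAEMONS=(hwclock @syslog-ng dbus !network networkmanager !dhcpd !ifplugd netfs @crond @sshd)
```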

  • Large form set: Deliver in portfolio, or build one big PDF?

    Greetings, all--
    I am using LiveCycle to recreate a set of 24 or so forms that were originally built in Word. My users download the forms, complete them on their desktops, then print and submit the forms in hard copy. Most users are not tech savvy. The forms must be very easy for them to access, open, navigate, and print. Most also function within severe hardware limitations (slow Internet connections, old machines, etc.). On my end, I have a rusty at best command of scripting and am new to LiveCycle. I know what I want my forms to do but am very slow at figuring out how to make those things happen.
    I am working first to decide on how to deliver these forms. It looks like I will have to build one big PDF to get the form behavior I want (autopopulation of like fields, field calculation, etc.). Does that sound right? Or is it possible to set up each file as a separate PDF and package them in a portfolio and still get cross-file behavior like auto field population, sequential page numbering, etc.?
    Any feedback on setting up big set of files would be much appreciated.
    Thank you,
    Virginia

    Hi,
    For what it is worth...
    First of all, consider what version of Acrobat/Reader the user population
    will have. Improved functionality comes with each new version, so you might
    end up including features in your form that won't work on the users' PCs. In
    LC Designer 8.2 you can define the target version (in File/Form
    Properties/Defaults) and then check the Warnings tab to make sure that the
    form will run OK in that version (i.e., no warnings).
    If people don't have to save the form (or the data that they have typed in),
    then you don't have to worry about Reader-enabling the form. Reader-enabling
    gives users with Reader the ability to save the form; useful if they are
    filling in the same form regularly.
    The implementation of Portfolios in version 9 is very good. However, if
    users have older versions of Acrobat/Reader, it will revert to the previous
    implementation of Packages (less graphical) and users will get warning
    messages.
    Keeping the forms separate will help performance, but may make it more
    difficult for users to locate the correct form. Creating one large form in
    LC is possible (and will make it very easy to share values across the 24
    forms because they will all be in the one XFA PDF); however, if each form
    has multiple pages and there is dynamic hide/visible script, performance may
    be a problem. If the form is static (i.e. does not grow), performance will
    not be as badly affected with the single-form approach.
    There is a workaround to get forms to talk to each other (whether in a
    portfolio or not), but it requires a good bit of scripting and a cool head.
    In summary, if you are working with static forms that will not grow (fields
    not extending to accommodate overflowing text), then I would go with one
    form. You can always develop the single form for the time being and then, at
    a later stage, break it out into 24 separate forms.
    Good luck,
    Niall

  • SSIS best practice on importing External text files

    Hi -
    I am a fairly seasoned SSIS/ETL developer and I am struggling with the best
    architecture for importing vendor files into a shared database. I'm really
    wanting input from senior-level SSIS developers on their thoughts on
    importing vendor files. I see there being two methods:
    - Set up the ETL to match the format of the file. If the format is invalid,
    the entire ETL is unable to process the file; various error handling will be
    set up to log that the file import failed, but at that point the ETL can no
    longer continue.
    - Set up the ETL to just take a text file, NOT looking at the format within
    the file, bring all of the data into ONE column, and then parse through the
    data using the given file delimiter, logging any issues at that point.
    I have done both methods and I think there are advantages and disadvantages
    to both. Hopefully I explained that well enough. Can anyone give me your
    thoughts, suggestions, experience, etc. on these two approaches to importing
    a file? Any input is greatly appreciated.
    Thanks!
    Jenna G

    It depends on how much control you have over the source end. If you can, the
    best thing would always be to fix the metadata up front and create your
    package based on that. Any violation should be flagged as an error and
    reported back for correction with row/column details etc.
    If you have no way to fix the source, then go for the single-column
    approach. But this can still prove to be a nightmare if the source format
    keeps changing. Giving that flexibility to users can prove costly sometimes,
    and you may have to spend quite a bit of time trying to fix inconsistencies
    in the source.
    So, in my opinion, the former approach would be best. The only thing is that
    the format (metadata) has to be fixed after sufficient discussion with
    everyone involved, and the exception handling also has to be agreed upon.
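    The single-column approach from the question can be sketched outside SSIS to
    show how the parse-and-log step behaves (illustrative Python; the delimiter
    and expected column count are invented):

```python
# Read each raw line as one "column", then parse with the expected
# delimiter; non-conforming rows go to an error log and the load continues.
def load_single_column(raw_lines, delimiter="|", expected_cols=3):
    good, errors = [], []
    for lineno, line in enumerate(raw_lines, start=1):
        fields = line.rstrip("\n").split(delimiter)
        if len(fields) != expected_cols:
            errors.append((lineno, line))  # row-level error, not a package failure
        else:
            good.append(fields)
    return good, errors

rows = ["a|b|c\n", "d|e\n", "f|g|h\n"]
good, errors = load_single_column(rows)
print(len(good), len(errors))  # 2 1
```

    Note the contrast with the first method: a bad row is logged and skipped
    here, whereas a strict file format would fail the whole import.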
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
