High Level Thread Implementation Questions

Hi,
Before I take the plunge and program my software using threads, I have a few high-level questions.
I plan on having a simulation class that instantiates software agents, each with different parameters. There is an agent class, with a constructor, methods, etc. Each agent has a sequence to go through. Once it is completed, the iteration number is increased and the sequence is repeated. That's simple enough to do.
The question is, is it worth executing each agent on a different thread?
If each agent is around 500 - 1000 lines of code (a crude measurement, I know), how many agents can I expect to thread efficiently?
One parameter allows an agent to execute n cycles for each global iteration. (i.e. in one iteration, agent A runs once, agent B runs 5 times). Could this be a problem? Should this be controlled outside the agent, or inside it?
Can I write the code without having to worry about threading, or do I have to design the agent code with threading in mind?
Will they really run in parallel? It is important that there is no bias in the execution order. I can solve this without threads by randomising the execution order, but that is a messy workaround, and it is why I'm looking at threads.
Can threaded objects interact easily with non-threaded ones when execution order is important?
Are there any other points that I should consider?
Thanks in advance - any information before I enter this uncharted territory will be truly appreciated!!

I think you are better off running this all in a single thread.
Threads make no guarantees about scheduling. Threads do not increase efficiency (unless your agents block on I/O or sleep). Threads come with an overhead cost.
Threads don't guarantee no bias to execution order.
Threads require synchronization to ensure safe interaction between each other. This is a bit of extra work, and can be a bitch if you're not familiar with it.
Yes, threads run in parallel. If you have multiple processors then they can truly run in parallel, otherwise they run in time slices.
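To make the single-threaded suggestion concrete, here is a minimal Java sketch (the Agent interface and the method names cyclesPerIteration() and step() are made up for illustration, not taken from the original post). It shuffles the agent order on every iteration, which removes ordering bias without any threads or synchronization, and it lets each agent run its own number of cycles per global iteration.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Hypothetical agent contract: one call to step() is one cycle of the agent's sequence.
    interface Agent {
        int cyclesPerIteration();   // e.g. agent A returns 1, agent B returns 5
        void step(int iteration);
    }

    class Simulation {
        private final List<Agent> agents = new ArrayList<>();

        void add(Agent agent) { agents.add(agent); }

        void run(int iterations) {
            for (int i = 0; i < iterations; i++) {
                // Randomise the order each iteration so no agent is systematically favoured.
                Collections.shuffle(agents);
                for (Agent agent : agents) {
                    for (int c = 0; c < agent.cyclesPerIteration(); c++) {
                        agent.step(i);
                    }
                }
            }
        }
    }

If the agents ever become CPU-heavy and truly independent, the inner loop could later be handed to an ExecutorService, but the agent code would then have to be written with thread safety in mind, as the reply above notes.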

Similar Messages

  • Data Level Security implementation question

    I had a quick data-level security scenario and wanted to solicit any input from the experts.
    In our current Subject Area we have one Presentation Layer using one Business Model. In this Subject Area we have a Task and an Employee Dimension. There is row-level security on the Task Dimension that is done in the Business Model on the LTS Content tab. There is a batch of reports built off this Subject Area.
    There is now a request to build a new batch of reports; however, they now want to filter on the Employee table and NOT filter on the Tasks. So, the opposite of what has been applied above.
    From my perspective there are only a few ways this security can be applied:
    Business Layer: Either create an alias of Employee and Task or build a second LTS for both. Then create new columns and map them accordingly. In effect, have two of each column in the Business Layer: one with security applied and one without.
    Presentation Layer: Create a second Presentation Subject Area, apply the security at the Presentation Layer, and remove it from the Business Layer.
    I know a third option could be to put security on the Role/Group, but in this case these reports are open to everyone.
    I'd just like to verify with the experts that I have covered all solutions for this scenario, or whether there are any other suggestions?
    Thanks!

    Alright...
    If you have two LTS, say A & B (basically duplicates), then add a column, say LTS Indicator, and assign 'A' for LTS A and 'B' for LTS B. Add the fragmentation content and apply the security filter. You can also create two different Presentation folders under the same Subject Area, if users have Answers access, so that the users know whether they are querying LTS A or LTS B.
    Similarly, build your reports making use of the LTS Indicator, which will tell the BI server to pick the correct LTS. Say you want LTS A to be picked: use a filter of LTS Indicator = 'A', and that's it.

  • Question about java thread implementation

    Hi All,
    I am comparing the performance of the Dining philosopher's problem
    implemented in Java, Ada, C/Pthread, and the experimental language I
    have been working on.
    The algorithm is very simple: let a butler restrict entry to the eating table, so that deadlock is prevented.
    It turns out that the Java code is the winner, and it is 2 times faster than C!
    The comparison result really surprised me and raised a big question: why does Java run so fast? I did not use high-level synchronization constructs like semaphores, atomic variables, etc.
    I vaguely recall that Java threads are actually implemented on top of the underlying system thread library (so on Linux the default would be NPTL, on Windows NT threads, and on Mac OS X it would be Cthread?). I can no longer remember where I read that.
    Does anyone here have some notion of how Java threads are implemented (where is this formally explained)? Or does anyone know where I can find relevant literature or the answer?
    thanks a lot.
    cheers,
    Tony

    Peter__Lawrey wrote:
    Google has lots of information on Java threading.
    One thing Java does is support biased locking, i.e. the thread which last locked an object can lock that object again faster.
    This may explain the difference. [http://www.google.co.uk/search?q=java+usebiasedlocking]
    Note: you can turn this option off, and other locking options, to see if a particular feature is giving you the performance advantage.
    Personally I have found that real-world multi-threaded applications are easier to write in Java, especially when you have a team of developers. For this reason, a lot of C/C++ libraries are single-threaded, even when there would be a performance advantage in being multi-threaded (because in reality it's just too hard to make it thread safe and faster because of it).
    I didn't know that, very interesting :-)
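    For readers curious about the "butler" algorithm described in the question: one common way to express it in Java is with a counting semaphore that admits at most N-1 philosophers to the table, which breaks the circular wait. This is only an illustrative sketch (the original poster says they did not use high-level constructs such as semaphores), and the class and method names are invented. Peter's biased-locking theory can also be checked by re-running the benchmark with the HotSpot flag -XX:-UseBiasedLocking on JVMs that still support it.

        import java.util.concurrent.Semaphore;

        // Illustrative butler-style dining table: at most (seats - 1) philosophers may
        // try to pick up forks at the same time, so a deadlock cycle can never close.
        class DiningTable {
            private final Object[] forks;
            private final Semaphore butler;

            DiningTable(int seats) {
                forks = new Object[seats];
                for (int i = 0; i < seats; i++) forks[i] = new Object();
                butler = new Semaphore(seats - 1);
            }

            void dine(int philosopher) throws InterruptedException {
                butler.acquire();                           // ask the butler for a seat
                Object left  = forks[philosopher];
                Object right = forks[(philosopher + 1) % forks.length];
                synchronized (left) {
                    synchronized (right) {
                        // eat: critical section holding both forks
                    }
                }
                butler.release();                           // leave the table
            }
        }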

  • Very High-Level Question on Approach (PI ccBPM or CE7.1?)

    All -
    I have a very high-level question. I am not current on what would be the appropriate tool for my scenario.
    My scenario...
    Transitioning business to use new sales order system.
    This will be a phased roll-out, not Big Bang.
    Company ordering website needs to send orders to one of two systems (new and old).
    Assume order has a flag or some other value that is an indicator to which backend application it should be sent.
    Each receiving application has an order insert web service, which would be in PI 7.1's ESR.
    What we "envision" is that there would be some kind of fronting web/enterprise service that would read the "flag", then pass the message to the appropriate web service (and on to the appropriate application).
    But I am really not clear on how to architect this, or whether any rules "engines" (ccBPM or in CE 7.1) would or should be used.
    You opinions are welcome...
    Thank you for your time...

    Hi Eric,
    It would be more elegant if you could classify these orders logically and provide two user interfaces, one for each type.
    But coming back to your question: if you are planning to implement the process in BPM, you could utilize the capabilities of BRM for this purpose. BRM comes with CE 7.1 and works in conjunction with BPM.
    [BRM Help|https://www.sdn.sap.com/irj/sdn/nw-rules-management]
    Once you want a complete roll-out, you can remove the BRM decision making. Logically this would serve the purpose. ccBPM is not really positioned for this approach; it can be used when you need to interact with multiple systems using different protocols.
    Caveat: the use of BPM will restrict your UI to WebDynpro Java (at least for now). The whole process needs to be built around web services or RFCs.
    Regards
    Bharathwaj

  • Basic  XML Publisher Question: How to access tags in the higher levels?

    Hi All,
    We have a basic question in XML Publisher.
    We have a xml hierarchy like below:
    <CD_CATALOG>
      <CATALOG>
        <CAT_NAME>CATALOG 1</CAT_NAME>
        <CD>
          <TITLE>TITLE1</TITLE>
          <ARTIST>ARTIST1</ARTIST>
        </CD>
        <CD>
          <TITLE>TITLE2</TITLE>
          <ARTIST>ARTIST2</ARTIST>
        </CD>
      </CATALOG>
      <CATALOG>
        <CAT_NAME>CATALOG 2</CAT_NAME>
        <CD>
          <TITLE>TITLE3</TITLE>
          <ARTIST>ARTIST3</ARTIST>
        </CD>
        <CD>
          <TITLE>TITLE4</TITLE>
          <ARTIST>ARTIST4</ARTIST>
        </CD>
      </CATALOG>
    </CD_CATALOG>
    We need to create a report like below:
    CATALOG_NAME    CD_TITLE    CD_ARTIST
    CATALOG 1       TITLE1      ARTIST1
    CATALOG 1       TITLE2      ARTIST2
    CATALOG 2       TITLE3      ARTIST3
    CATALOG 2       TITLE4      ARTIST4
    So we have to loop at the level of <CD> using for-each CD. But when we are inside this loop, we cannot access the value of CAT_NAME which is at a higher level.
    How can we solve this?
    Right now, we are using the work-around of set_variable and get_Variable. We are setting the value of CAT_NAME inside an outer loop, and using it inside the inner loop using get_variable.
    Is this the proper way to do this, or are there better ways? We are running into trouble when the data is inside tables.

    You can use <?../CAT_NAME?> to reference the parent level. Copy and paste this into your template:
    <?for-each:CD?> <?../CAT_NAME?> <?TITLE?> <?ARTIST?> <?end for-each?>

  • High-level/scripting languages learning thread

    Hi all,
    In recent weeks i have looked into many of the high-level/scripting languages.  All of them easy enough to get into quickly. My problem though is not learning them actually, but that i don't actually have much use now. Sure, from time to time i need a little script for something (and sometimes i then translate that script to lots of languages just for the heck of it like here), but that doesn't amount to much. However on the other hand i'm neither in some job regarding IT/programming nor do i study anything with respect to programming, and i also am not interested in more programming as in compiled languages, system programming or things like that. (At the very least not yet). So i'm doing this just for fun and learning (two of my lifegoals). I am aware of for example Project Euler, however i'm not mathematically interested enough for that.
    So, the purpose of this thread are two things.
    a) I'm asking for suggestions for interesting things i could do with high-level/scripting languages, maybe someone knows of something Project Euler like but for more mundane things and not maths.
    b) So as to give this thread another purpose and not make it only about me, maybe people who have some problem writing a script for something can ask for help. I know of the other thread (the long one, "commandline utilites/scripts"), but that one seems to be more of the sort where someone posts a script he/she uses and then maybe someone posts an answer to that. So for this thread here people should be able to ask for help while creating the script, or even "Where to start". This could serve both the people with the problem and the people wanting to learn more about some language but not finding a way to apply the learning.
    Ogion

    a) I'm asking for suggestions for interesting things i could do with high-level/scripting languages, maybe someone knows of something Project Euler like but for more mundane things and not maths.
    To me, this sounds like the Python Challenge: http://www.pythonchallenge.com/
    Also, if you're not interested in math, maybe you might still find yourself engaged by something like natural language processing, games, or simulations? I personally find the "Natural Language Toolkit" for Python to be a lot of fun.

  • What are the steps (High level) needed to implement US Payroll?

    Hi,
    What are the (high level) steps needed to implement US Payroll?
    Please provide any relevant URLs for any steps if possible.
    Thanks,
    Prabakar

    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Define employee attributes
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Define employee attributes
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Define employee attributes
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Create payroll area
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Create payroll area
    Transaction Code:-PE03
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Check Default Payroll Area
    Transaction Code:-PA03
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Create control record
    Transaction Code:-PA03
    Personnel Management → Personnel Administration → Organizational Data → Organizational Assignment → Create control record
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Define EE Sub Group Grouping for PCR and Collective Agreement Provision
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Define Reason for Change
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Check PayScale Type
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Check PayScale Area
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Check Assignment of PayScale Structure to Enterprise Structure
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Determine Default for PayScale Data
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Setup Payroll Period for Collective Agreement Provision
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Define PayScale Salary ranges
    Transaction Code:-OH11
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Create Wage Type
    Transaction Code:-OH11
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Create Wage Type
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Check Wage Type Group "Basic Pay"
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Check Wage Type Catalog → Check Wage Type Text
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Check Wage Type Catalog → Check Entry Permissibility Per Infotype
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Check Wage Type Catalog → Check Wage Type Characteristics
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Check Wage Type Catalog → Check Wage Type Characteristics
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Employee Sub Group Grouping for Primary Wage
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Personnel Sub Area Grouping for Primary Wage Type
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Define Wage Type Permissibility for each PS and ESG
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Basic Pay → Wage Types → Define Wage Type Permissibility for each PS and ESG
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Define Reason for Change
    Transaction Code:-OH11
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Create Wage Type Catalog
    Transaction Code:-OH11
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Create Wage Type Catalog
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Check Wage Type Group "Recurring Payments and Deduction"
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Check Wage Type Catalog → Check Wage Type Text
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Check Wage Type Catalog → Check Entry Permissibility Per Infotype
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Check Wage Type Catalog → Check Wage Type Characteristics
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Check Wage Type Catalog → Check Wage Type Characteristics
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Define Employee Sub Group Grouping for Primary Wage Type
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Define Personnel Area Grouping for Primary Wage Type
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Define Wage Type Permissibility for each PS and ESG
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Recurring Payment and Deduction → Wage Types → Define Wage Type Permissibility for each PS and ESG
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Define Reasons for Changes
    Transaction Code:-OH11
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Create Wage Type Catalog
    Transaction Code:-OH11
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Create Wage Type Catalog
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments and Deduction → Wage Types → Check Wage Type Group Additional Payments
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments and Deduction → Wage Types → Check Wage Type Catalog → Check Wage Type
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Check Wage Type Catalog → Check Entry Permissibility for Additional Payments
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Check Wage Type Catalog → Check Wage Type Characteristics
    Transaction Code:-OH13
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Check Wage Type Catalog → Check Wage Type Characteristics
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Define Employee Sub Group Grouping for Primary Wage Type
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Define Employee Sub Group Grouping for Primary Wage Type
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Define Wage Type Permissibility for each PS and ESG
    Transaction Code:-OH00
    Personnel Management → Personnel Administration → Payroll Data → Additional Payments → Wage Types → Define Wage Type Permissibility for each PS and ESG
    This is not the complete list; I have given it just for reference purposes.
    Edited by: Sikindar on Nov 19, 2008 9:19 AM

  • Does anyone have a sample implementation plan that can be shared?  High level?

    Does anyone have a sample implementation plan that can be shared?  High level?

    You will probably need to inquire with a VMware consultant to get this kind of information.  VMware depends on these people to make sure they keep the reputation of the software at a very high level.  
    They will have access to various free tools to help large and small scale deployments.  Tools like VMware Health Check Script and the ESX deployment tool.
    If you find this information useful, please award points for "correct" or "helpful".
    Wes Hinshaw
    www.myvmland.com

  • High-Level JTS/TopLink design question

    I've gone through the "using JTS with TopLink" docs, and it mostly makes sense. However, I still don't understand how TopLink "knows" when I call acquireUnitOfWork() whether or not I'm participating in a distributed 2PC transaction.
    Said another way:
    Let's say I've got an application based on TopLink (registering appropriate JTS stuff) that exposes an API that can be accessed remotely (RMI, SOAP, whatever).
    And, I've got another, separate application using a different persistence-layer technology (also supporting JTS) that also has an API.
    Now, I create a business method that uses the APIs from both of these applications, and I want them to participate in a single, distributed transaction.
    At a high level (source code is unnecessary), how does that work?
    Would the API need to support an ability to specify a TransactionContext, or is this all handled behind the scenes by the two systems registering with the Transaction Service?
    If this is all handled through registration, how do these 2 systems know that these specific calls are all part of the same XA transaction?

    Nate,
    TopLink participates in JTA/JTS transactions but does not control them. When you configure TopLink to use the JTA/JTS services of the host application server, you are deferring TX control to the J2EE container. TopLink will in this case register each acquired UnitOfWork in the current active TX from the container. The container will also ensure that the JDBC connection provided to TopLink is bound by the active TX.
    In order to get 2PC you must register multiple resources in the same JTA TX. The TX processing during commit will then make the appropriate callbacks to the underlying data sources, as well as the necessary callbacks to listeners such as TopLink to have its SQL issued against the database.
    In short: the J2EE container manages the 2PC TX and TopLink is just a participant.
    Doug Clarke
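    As a rough sketch of what the calling business method could look like with bean-managed JTA (the DAO interfaces and method names below are invented for illustration; in a container-managed EJB the begin/commit/rollback would be done by the container, exactly as Doug describes, with each persistence layer enlisting its own XA resource in the same transaction):

        import javax.naming.InitialContext;
        import javax.transaction.UserTransaction;

        // Hypothetical facades over the two persistence layers (one TopLink-based, one not).
        interface ToplinkOrderDao { void save(Object order) throws Exception; }
        interface OtherSystemDao  { void record(Object order) throws Exception; }

        class OrderService {
            private final ToplinkOrderDao toplinkDao;
            private final OtherSystemDao otherDao;

            OrderService(ToplinkOrderDao toplinkDao, OtherSystemDao otherDao) {
                this.toplinkDao = toplinkDao;
                this.otherDao = otherDao;
            }

            void placeOrder(Object order) throws Exception {
                UserTransaction utx = (UserTransaction)
                    new InitialContext().lookup("java:comp/UserTransaction");
                utx.begin();
                try {
                    // Both calls run under the same JTA transaction; the transaction
                    // manager drives the two-phase commit across both resources.
                    toplinkDao.save(order);
                    otherDao.record(order);
                    utx.commit();
                } catch (Exception e) {
                    utx.rollback();
                    throw e;
                }
            }
        }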

  • PL/SQL Ad-hoc Reporting Implementation Questions

    Recently, I have been put on a development team that is developing a small reporting module for one of our applications.
    I'm trying to mask the underlying structure from the application by having the application run PL/SQL procedures which return REF CURSORS to the client with the requested data. Right now I'm struggling with how to implement the PL/SQL in the best fashion.
    Since this is a reporting tool, the user has many combinations of selections. For example, at a high level this application has a combination of multiple dropdowns (4-6), radio buttons, and check boxes. As you can see, this results in a large number of possible combinations that have to be sent to the database.
    Basically, the user chooses the following:
    1. Columns to receive (ie different SELECT lists in the PL/SQL)
    2. Specific conditions (ie different WHERE clauses in the PL/SQL)
    3. Aggregate functions (SUMS, TOTALS, AVERAGES based on #1 and #2)
    4. Trends based on #3.
    So... with that said I see two possibilities:
    1. Create a static query for each combination of parameters (in this case that would most likely result in at least 300 queries that would have to be written, possibly 600+).
    The problem I see with this is that I will have to write a significant number of queries. This is a lot of front-end work that, while tedious, could result in a better-performing system because it would be a parse-once, execute-many scenario, which is scalable.
    The downside though is that if any of the underlying structure changes I have to go through and change tens of queries.
    2. Use DBMS_SQL and dynamically generate the queries based on input conditions.
    This approach (possibly) sacrifices performance (parse once, execute once situation), but has increased maintainability because it is more likely that I'll have to make one change vice a number of changes in scenario 1.
    A downside to this is that, it may be harder to debug (and hence maintain) because the SQL is generated on the fly.
    My questions to all are:
    1. Which approach would best balance maintainability and performance?
    2. Are there any other approaches to using PL/SQL as a reporting tool that I am not thinking of?
    The database is 10.2.0.3, and the 'application' is PHP 5.1 running on IIS 6.
    If you need me to provide any additional information please let me know.
    Thanks!

    Ref cursors are an ugly solution (different though in 11g).
    You build dynamic SQL, and it must/should have bind variables. But a ref cursor does not allow you to bind values to bind variables dynamically: you need to code the actual bind variables as part of the OPEN cursor command, and at coding time you have no idea how many bind variables there will be in that dynamic SQL for the ref cursor.
    The proper solution is DBMS_SQL, as it allows exactly this. That is also one of the reasons why APEX uses it for its report queries.
    The only sensible way to implement this type of thing in PL/SQL is by not trying to make it generic (as one could with DBMS_SQL). Instead, use polymorphism (overloading) and have each procedure construct the appropriate ref cursor with bind variables.
    E.g.
    create or replace package query as
        procedure Emp( c in out sys_refcursor );
        procedure Emp( c in out sys_refcursor, nameLike varchar2 );
        procedure Emp( c in out sys_refcursor, deptID number );
    end;
    /
    Package created.

    create or replace package body query as
        procedure Emp( c in out sys_refcursor ) is
        begin
            open c for select * from emp order by 1;
        end;

        procedure Emp( c in out sys_refcursor, nameLike varchar2 ) is
        begin
            open c for select * from emp where ename like nameLike order by 1;
        end;

        procedure Emp( c in out sys_refcursor, deptID number ) is
        begin
            open c for select * from emp where deptno = deptID order by sal;
        end;
    end;
    /
    Package body created.

    SQL> var c refcursor
    SQL> exec query.Emp( :c, 'S%' )
    PL/SQL procedure successfully completed.

    SQL> print c
    EMPNO ENAME JOB     MGR  HIREDATE            SAL  COMM DEPTNO
    7369  SMITH CLERK   7902 1980-12-17 00:00:00  800       20
    7788  SCOTT ANALYST 7566 1987-04-19 00:00:00 3000       20

  • Table_comparison - how to compare data at a high level

    Hi,
    I have to do data validation at a high level between two tables that I am loading.
    I am trying to use the table_comparison transform, but the problem is that my target table is at a much lower level than the one at which I want to compare data. So it has many more columns (both key and data fields) than what I want to compare.
    Does the output of the query transform (which I am using as input into table_comparison) have to be in exactly the same format as the comparison table? If not, can somebody suggest something else?
    Or how can I compare the output of two query transforms?
    Thanks,
    Saurabh Bansal

    Dear Saurabh,
    Not sure if you have already got the solution to this. If yes, please close the thread.
    If not, I would suggest you use a validation rule to compare the two tables and then, based on the PASS or FAIL result, check what needs to be done with the output.
    Do post back if you have got the solution or if you need any further help, or else close the question.
    regards,
    Den

  • Errors in the high-level relational engine. The data source view does not contain a definition for the table or view. The Source property may not have been set.

    Hi All,
    I have a cube in which I'm using the TIME DIM that I created in the warehouse. But now I want a new measure in the cube, which is an average over time, and when I tried to create the new measure I got a message that no time dimension was defined, so I created a new time dimension in SSAS using the wizard. But when I try to process the new time dimension I'm getting the following error message:
    "Errors in the high-level relational engine. The data source view does not contain a definition for "SSASTIMEDIM" the table or view. The Source property may not have been set."
    Can anyone please tell me why I cannot create a new measure averaged over time using my time dimension? Also, what am I doing wrong with SSASTIMEDIM that I'm getting this error?
    Thanks

    Hi PMunshi,
    According to your description, you get the above error when processing the time dimension. Right?
    In this scenario, since you have updated the DSV, there should be no problem with the table's existence. One possibility is that the table has been specified for tracking in the notifications for proactive caching, but isn't available any more for some reason. Please change the Proactive Caching setting to "MOLAP".
    Reference:
    How To Implement Proactive Caching in SQL Server Analysis Services SSAS
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • High-level interrupt handler

    Why do I get to decide whether or not to support a high-level interrupt? Under what conditions will the Solaris kernel map my hw interrupt (INTA from the PCI bus) to a high-level interrupt? When should I refuse to support a high-level interrupt? Why? Can I force my hw interrupt to be a high-level interrupt?
    Also, consider that most hw interrupts indicate something important, such as buffers being full. If they are assigned a priority below the scheduler's, it really does not make sense.
    Is it possible to block any hw interrupts? Or, to put it another way: can I prioritize hw interrupts in Solaris?
    Thanks
    tyh

    Hi,
    On x86 each IRQ has a software priority assigned to it implicitly by the bus driver, although I think you could override it in the driver.conf. Unlike SPARC, the processor doesn't support a PIL so software priorities are implemented by masking all lower-priority IRQs and re-enabling interrupts.
    High priority interrupts, above dispatcher level, run in the context of the current thread on the cpu, normal level interrupts are handled by interrupt threads.
    The interrupt threads are the highest priority threads on the system, so will preempt any other running threads. In addition mutexes in Solaris use priority inheritance, so the interrupt threads will get to run.
    In general, high level interrupts are allocated to devices with small buffers such as serial or floppy, so that their buffers get serviced in the fastest possible time. Others can afford to wait for just a bit.
    Your driver should check to see if its device has been allocated a high level interrupt. If this is the case, the high level handler should clear the interrupt and save the data/status (in the driver state structure perhaps) and trigger your soft level interrupt handler (which will run as a thread).
    Blocking of interrupts is done for you when you acquire a spin mutex (ie initialised with an iblock cookie). Such a mutex is required to synchronise access to data shared with a high level handler in your driver.
    Please take a look at the Intel Driver writers orientation at:
    http://soldc.sun.com/developer/support/driver/docs/Solaris_driver_models/index.html
    Hope that helps,
    Ralph
    SUN DTS

  • High-level view of steps for 10g OWB-OLAP to Discoverer

    I would greatly appreciate ANY feedback to the following steps. These are not necessarily correct or the best way to do this. I am attempting to take source data, use OWB, create the analytical workspace, and from there have the metadata available and used by Discoverer.
    This is rather high-level, feel free to jump in anywhere.
    We are trying to see if we can get away with NOT using the Analytical workspace manager (AWM) if possible. With that in mind, we are trying to make the most of the process with OWB & OLAP.
    Is this possible to do without ever using the AWM? Can we go end to end (source data--->discoverer final reporting) primarily using OWB to get to the point where we can use the metadata for Discoverer?
    Can anyone relate experiences perhaps that would make me want to consider using the AWM at certain points instead?
    Most importantly, if I do use this methodology, would I be safe after everything has been set up? Would I want to consider using AWM at a later point for performance reasons while I am using Discoverer? Or would OWB be helpful as well in some aspects of data maintenance? Any clue as to how often I might need to rebuild, and if so, what to use in that case to minimize time?
    Thanks so much for any insight or opinion on anything I have mentioned!

    Hi Gregory,
    I guess the answer is that this depends. My first question is whether you are looking at a Relational OLAP or Multi Dimensional OLAP solution? This may change the discussion slightly, but just lets look at some thoughts:
    In essence you can use the OWB bridge to generate the AW objects (cubes etc). If you do that (for either ROLAP or MOLAP) you will get the AW objects enabled for querying, using any OLAPI query tool, like BI Beans or the new Discoverer for OLAP. The current OWB release does not run the discoverer enabler (creating views specifically written for EUL support in Disco classic).
    So if you are looking at Disco classic you must use the AWM route...
    The other thing that you must be aware of is that the OWB technology is limited to cloning the relational objects for now. This means that you will create a new model based on your existing data. If you want to tweak the generated objects you will probably need to go to the underlying code in either scenario.
    So if you want to create calculated measures, for example, you could generate a cube with OWB, create a "dummy measure" and add the formula in OLAP DML. The same goes for some other objects you may want to create, such as text measures...
    The benefit of creating placeholder or dummy measures is that the metadata is completely in order; you simply change the measure's behavior...
    For the future (the beta starts relatively soon) OWB will support much more modeling, like logical cubes, and you can then deploy directly to OLAP. Also, the mappings are transparent to the storage: you map to a logical cube and OWB will implement the correct load for either OLAP or relational targets.
    We will also start supporting calculated measures, sparsity definitions, partitioning and compression on cubes, as well as parallel building of cubes.
    Hope this gives you some insight!
    Jean-Pierre

  • ME9F generates authorisation High level risks, why?

    Hi All!
    Please, advise why transaction ME9F - Message Output: Purchase Orders generates High level risks with such transactions as
    MK01     Create vendor (Purchasing)
    MK02     Change vendor (Purchasing)
    XK02     Change vendor (centrally)
    MIGO     Goods movement
    ME54     Release Purchase Requisition
    MI04     Enter Inventory Count with Document
    ME28     Release Purchase Order
    etc. As I understand it, this transaction is used for viewing and printing output. Why do such risks as "Enter Purch Agreements & create/modify fictious Vendor" or "Ability to create a purchase contract and release PO" come up?
    In what conditions such risks can be reasonable?
    Thank you

    Looking through really old posts of mine I saw that I had given this question a brain-dead answer years ago. I seriously doubt the poster is still looking for a solution (I certainly hope not) :) Almost assuredly the problem is because of a mismatch in data types. TRUNC(s1.date1,'DD') won't produce a number but rather a date truncated to midnight. TO_CHAR(TRUNC(s1.date1,'DD'), 'DD') will produce a number. It's difficult to tell from the code what the poster is trying to do, so I can't tell if that's the answer.
    P.S. I wasn't actually trying to bring this post to the top of the thread -- I was just editing what was clearly an incorrect answer.
    Edited by: matthew_morris on May 9, 2012 8:31 AM
