ME9F generates high-level authorisation risks - why?

Hi All!
Please advise why transaction ME9F (Message Output: Purchase Orders) generates high-level risks in combination with transactions such as
MK01     Create vendor (Purchasing)
MK02     Change vendor (Purchasing)
XK02     Change vendor (centrally)
MIGO     Goods movement
ME54     Release Purchase Requisition
MI04     Enter Inventory Count with Document
ME28     Release Purchase Order
etc. As I understand it, this transaction is only used for viewing and printing message output. Why do risks such as "Enter Purch Agreements & create/modify fictitious Vendor" or "Ability to create a purchase contract and release PO" come up?
Under what conditions could such risks be reasonable?
Thank you

Looking through really old posts of mine I saw that I had given this question a brain-dead answer years ago. I seriously doubt the poster is still looking for a solution (I certainly hope not) :) Almost assuredly the problem is a mismatch in data types. TRUNC(s1.date1,'DD') won't produce a number but rather a date truncated to midnight. TO_CHAR(TRUNC(s1.date1,'DD'), 'DD') will produce the day of the month as a character value, e.g. '09'. It's difficult to tell from the code what the poster was trying to do, so I can't tell whether that's the answer.
P.S. I wasn't actually trying to bring this post to the top of the thread -- I was just editing what was clearly an incorrect answer.
Edited by: matthew_morris on May 9, 2012 8:31 AM
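For what it's worth, here is the same type distinction sketched in Java rather than SQL (purely an illustrative analogy; the date and format pattern below are made up, not taken from the poster's query):

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class TruncDemo {
    public static void main(String[] args) {
        LocalDateTime d = LocalDateTime.of(2012, 5, 9, 8, 31);
        // Like TRUNC(date1,'DD'): still a date/time value, just cut back to midnight.
        LocalDateTime truncated = d.truncatedTo(ChronoUnit.DAYS);
        // Like TO_CHAR(TRUNC(date1,'DD'),'DD'): the day of the month as text.
        String dayOfMonth = truncated.format(DateTimeFormatter.ofPattern("dd"));
        System.out.println(truncated + " vs " + dayOfMonth); // 2012-05-09T00:00 vs 09
    }
}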

Similar Messages

  • High Level Question - Why create a tag?

    We have been using a component architecture for about three years that seems to be very similar to the JSP component architecture. We have UI components (such as a listbox, entryfield, table, tree, etc.), each with an associated renderer. In our JSPs, though, we directly call the renderer to render the component (the component delegates to its renderer).
    <%= pagebean.getEntryfield().render() %>
    I still can't see the great benefit of hiding that in a tag, especially considering our developers and page designers are the same people, and they are very good at Java. I wanted to simplify the developer APIs as much as possible (avoiding XML, etc.). That's why we've stuck with the above API versus using tags, not to mention debugging Java is much easier. I am really excited about JSF, hoping we can move to a standard API versus our proprietary one. What do you all think?
    Dave

    Interesting question ... and I hope the answer is equally illuminating.
    JavaServer Faces has many aims, but an important one relevant to this question is broadening the attractiveness of the Java platform to page authors and others who are not Java developers, and who would find the syntax of your scriptlet totally opaque and not understandable. Further, what you haven't shown is how you configure the characteristics of your component (probably <jsp:setProperty> or scriptlet expressions or something?).
    One of the mechanisms to improve this attractiveness will be to have high-quality tools support for JavaServer Faces components -- not just the standard ones, but anyone's third-party library. Picture the user who wants to use, say, a Calendar component, and your page author is using a GUI. What the user wants to be able to do is drag a Calendar off a template, drop it into their page, pop open a properties window, and configure all the detailed settings -- never seeing a line of code. The tag class (and the associated metadata in faces-config.xml) is what makes it possible for the tool to know which properties go in the property sheet.
    In your environment, where the page author is also a Java developer, you still get a little benefit (configuring components through tag attributes is still more concise than <jsp:setProperty> or scriptlets). But there are many many many more page authors in the world who don't know Java, and don't want to know Java. JavaServer Faces is after those folks too.
    Craig McClanahan
    PS: JavaServer Faces components can also be accessed at the Java API level, so you can use scriptlets to do so in your pages if you really want to.
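    To illustrate that PS, here is a rough sketch of building and rendering a standard component from Java code rather than a tag. It assumes the JSF 1.2 API (UIComponent.encodeAll) and has to run inside a Faces request so that FacesContext.getCurrentInstance() returns a context:
    import java.io.IOException;
    import javax.faces.component.html.HtmlInputText;
    import javax.faces.context.FacesContext;

    public class ProgrammaticField {
        // Rough equivalent of <h:inputText value="initial text" size="20"/>.
        public static void render() throws IOException {
            HtmlInputText field = new HtmlInputText();
            field.setValue("initial text");
            field.setSize(20);
            // Rendering still delegates to the component's renderer,
            // much like pagebean.getEntryfield().render() in the post above.
            field.encodeAll(FacesContext.getCurrentInstance());
        }
    }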

  • Why does OWB 9.2 generate UKs on higher levels of a dimension?

    When you specify levels in a dimension, OWB 9.2 generates unique key constraints in the table properties for every level, but only the UK on the lowest level is visible in the configuration properties. Why then are these higher-level UKs generated? Is this a half-baked attempt to implement the possibility of generating a snowflake model in OWB?
    Jaap.

    Piotr, Roald and others,
    This is indeed a topic we have spent a lot of time on these past months. We are addressing it because we know this is a common problem (in my old days as a consultant I ran into it myself).
    So the solution is one that goes in 2 directions:
    - Snowflake support
    - Advanced dimension data loading
    Snowflake is obvious; it may not be desired for various reasons, but we will start supporting it and loading data for it in mappings.
    If you want a star table, you will know that a completely flattened table with day at the lowest level will not be able to get you a unique entry for month. So what people tend to do is one of the following:
    - Proclaim the first of the month the Month entry point (this stays closest to the star table and simply relies on semantics on both ETL and query side).
    - Create extra day-level entries which symbolize the month, so you have a day level with extra entries
    - Create views, extra tables etc to cover the extra data
    - Create a data set within the tables that solves the key problem
    We have opted for the last one. What you need to do for this is a set of records that uniquely identify any record in any level. Then you add a key which links to the dimension at the same point (a dimension key), so all facts always use this surrogate key to link (makes life in query tools easier).
    For a time dimension you will have a set of day records with their months etc. in them (the regular star). Then you add a set of records with NULL in the day column that carry month and higher levels (e.g. a row with day = NULL and month = '2004-01' stands for the month itself), and you go up the hierarchy the same way. For this we will have the ETL logic (in other words, you as a designer do not worry about this!). On the query tool you must be a little cautious with counts, but this is doable and minor.
    As you can see none of the solutions is completely transparent, but we believe this one solves a lot of problems and gives you the best of all worlds. We will also support the same data structure in the OLAP dimensions for the database as well as in the relational dimension. NOTE that there are some disclaimers with this as we are doing software here...
    In principle, however, we will solve your problem.
    Hope this explains some of our plans in this area.
    Jean-Pierre

  • Running a Sub-VI and monitoring data that is generated on a higher level VI

    Hi All, 
    This question must have been asked before, but I cannot find a suitable answer here on the forums...
    I have a 'top-level' VI that does a lot of things. I also have a sub VI that runs a frequency sweep on a piece of equipment. This is done with a for loop. 
    Problem: 
    I want to monitor/access the data that is generated in the for loop (see the attachment; the 3 wires within the green circle are what I want to monitor).
    2 Questions:
    How can I access the data on the wires (within the loop) from a higher level VI?
    How can I then run this VI in a higher level VI while the higher level VI is continuing and not waiting for the sub-VI to complete?
    I tried using a queue but I cannot seem to get that working.
    Any suggestions?
    Regards,
    Attachments:
    LV problem.PNG ‏44 KB

    The queue is a good way to move data from a running subVI to another VI. Your problem is that if the subVI is inside a loop in the main VI, that loop cannot iterate until the subVI completes. The solution: have the subVI run in parallel with - not inside - the loop.
    Look at the Producer/Consumer design patterns (at File >> New... >> VI >> From Template >> Frameworks >> Design Patterns >> Producer/Consumer). This may be more than you need at the moment, but it will show how the parallel code process works.
    Lynn
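    Since LabVIEW diagrams cannot be shown in text, here is the same producer/consumer idea sketched in Java (the sweep values and the NaN sentinel are illustrative assumptions, not anything taken from the attached VI):
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SweepMonitor {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Double> queue = new LinkedBlockingQueue<>();
            // Producer: stands in for the sweep subVI's for loop.
            Thread sweep = new Thread(() -> {
                for (int i = 0; i < 100; i++) {
                    queue.offer(Math.sin(i / 10.0)); // one reading per sweep step
                }
                queue.offer(Double.NaN);             // sentinel: sweep finished
            });
            sweep.start(); // runs in parallel, so the loop below is never blocked by it
            // Consumer: stands in for the monitoring loop in the main VI.
            while (true) {
                double sample = queue.take();        // waits for the next value
                if (Double.isNaN(sample)) break;
                System.out.println("latest sample: " + sample);
            }
        }
    }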

  • High level description of why an Enterprise Admin account is required for DirSync config

    Hi all,
    I understand that as part of the Azure AD Sync tool configuration wizard you are required to enter the credentials of an Enterprise Admin account. These credentials are required for the creation of the MSOL_AD_Sync service account within the Users OU of Active Directory. This account is granted read and synchronization permissions to the local Active Directory.
    Is someone able to provide a high-level description of what this actually means, i.e. exactly which permissions are granted and on which objects? Are we talking about having to modify the permissions of every single object within Active Directory?
    Many thanks in advance,
    Graham

    Hi,
    To start with, I guess you know that the Enterprise Admin credentials are only used temporarily.
    I have some details from previous conversations and blogs.
    Hope this sheds some light on your query.
    When configuring the Microsoft Online Services Directory Synchronization Tool, you are asked to provide the credentials for an account that has Enterprise Admin permissions on your organization's local Active Directory directory service. It accepts credentials in either of the following forms:
    [email protected]
    Example\someone
    These Enterprise Administrator credentials are not saved. They are erased from the computer's memory after the service account is created.
    How the Active Directory Credentials Are Used
    The Microsoft Online Services Directory Synchronization Tool Configuration Wizard uses the Enterprise Admin credentials to create the directory synchronization service account, MSOL_AD_Sync. This service account is created as a domain account with directory replication permissions on your local Active Directory and with a randomly generated complex password that never expires.
    Note:
    Changing the password associated with the service account is not recommended.
    How the Service Account Is Used
    When the directory synchronization service runs, it uses the service account credentials to read from your local Active Directory and write to the synchronization database. The contents of the synchronization database are written to Microsoft Online Services using the Microsoft Online Services credentials requested on the Microsoft Online Services Credentials page of the Microsoft Online Services Directory Synchronization Tool Configuration Wizard.
    Note:
    If you add a domain to your Active Directory forest, you must run the Microsoft Online Services Directory Synchronization Tool Configuration Wizard again to add the new domain to the list of domains to be synchronized.
    Thanks & Regards
    John Chris

  • Where can I find various high level examples of workflows being used

    I am about to start a project with TCS 3.5 and have been participating in the Adobe webinars to help learn components and specific techniques, but what I am lacking is an understanding of the various workflows I could model my project after or take bits from. Why start with FrameMaker in this workflow versus RoboHelp or even Word? Questions like this I think come from experience with the process, and I am thinking that what I am getting myself into is a chess game with all these pieces; I don't want to paint myself into a corner by traveling down one route. I have seen a few workflow graphics (not reproduced here), but they are too generic and do not contain enough information to really understand the decision-making process one must go through on various projects.
    Can we have a series of webinars made, all with the underlying theme of defining a working process or workflow, by having guests describe how they have used or are using this suite in real life on their own projects? Ones that might include a graphic showing the routes taken through the suite, with reasons why?
    My project hopes to make a single-source internal site that will tie together various 3D portable industrial coordinate metrology systems (hardware and software). It would be used as a dispersal site for help, communications between users and SMEs, OEM information, QA requirements, established processes, scripting snippet downloads, statistics, and training (including SOJT). Portable industrial metrology has 8 different software packages that are used and right now about 8 different instruments. These include laser trackers and radars, articulated arms, scanners, and structured white and blue light, to name a few. The software packages include Spatial Analyzer, Veriserf, CompIT, eMscon and AXYZ, to name a few there as well. I want to be able to participate in and add content to an internal SharePoint site, push content to users for stand-alone workstations, ePub, capture knowledge leaving the company through attrition, develop easy graphic-rich job aid sheets, and aid in evaluations of emergent software and hardware. I would also like to leave the option open to use the finished product as a Rosetta Stone-like translator between the software packages: "doing this here is the equivalent of doing that in these other software packages", for example.

    PDF is definitely a format I want to include, to collaborate with other divisions and SMEs for one reason, but also for the ease of including 3D interactive target models within it, and for portability. I plan on being able to provide individual PDFs that are very specific in their topics and to also use them to disperse user guides, cheat sheets or job aids... something the user may want to laminate on their own and keep with them for reference, printed out. Discussion in these sheets would be drastically reduced to only the essentials, relying heavily on bullet points or steps, useful graphs, charts and tables... and of course illustrative images. I am thinking that these should be downloadable buttons to print on each topic section, not in a general appendix or such. They would hopefully be limited to one page, double-sided 8x10.
    The cheat sheet would have a simplistic flow chart of how or where this specific topic fits in the bigger picture,
    The basic steps,
    Illustrations, equipment, setup
    Software settings for various situations in a table or chart,
    Typical result graph to judge with,
    Applicable QA, FAA regulation settings or concerns,
    Troubleshooting table,
    Topic SME contact info
    On the back, a screen shot infographic of software process
    The trouble here is that I have read that FM sometimes has a problem successfully transferring highly structured or formatted material to RoboHelp. Does this then mean that I should take it from FM straight to PDF?
    Our OEM material is very high-level stuff... basically for engineers and not shop-floor users... but that is not to say they don't have some good material that could be useful. Our internal content is spread out across many different divisions and continents, with various ways of saying the same thing. This leads QA to interpret the information differently depending on where the systems are put to work. We also have FAA requirements that need to be addressed and that the user needs to be reminded of.
    Our company is also starting to see an exodus of the most knowledgeable users through retirement. Capturing the knowledge and soft-skill packages they have developed working here for 20-30 years is something I am really struggling with. I have only come up with two ideas so far:
    Internal User Web based Forum
    Interviews (some SMEs do not want to make the effort of transferring knowledge by participating in anything if it requires an effort they don't see as benefiting themselves), to get video, audio or transcription records

  • Phase out settings at a higher level such as brand or major customer

    Have any of you ever set up phase-out assignments at a higher level than product and had them work correctly?  For example, we want to phase out a brand for a major customer.  In other words, a customer is dropping a brand and we don't want statistical forecast generated for that brand/customer combination any longer.  I am able to set up the fields in the phase-out lifecycle settings for product, brand and major account, but when I enter the brand and major account I still get forecast generated.  It appears to stop for some products within the brand but not all.  Another example: if a customer quits ordering from us, I want to set the major customer up to phase out so no forecast is generated.
    If you have done this successfully please let me know.  Or if you would handle these situations in a different manner other than phase out please let me know.  We can do historical adjustments each period but that is a lot of maintenance to do after each period before statistical forecast is generated.
    Thanks
    Steve

    Hi Stephen,
    Life cycle planning works only at the detail level (each CVC); the option of aggregate planning is helpful if you want to phase in or out a certain CVC when you are forecasting at the aggregate level.
    One option is to have all products which fall under that brand and customer in the "profile assignment for life cycle" section. You can maintain a file and then automate the upload process into the "assignment".
    or
    you can try to use the copy functionality in realignment (/SAPAPO/RLGCOPY), where you maintain the copy factor as NIL; when the stat fcst is generated, you can run this as the next step to zero it out, but you would need to maintain the entries manually.
    or
    the easiest and safest way would be to create a selection for those combinations and not include them in the planning job for stat fcst.
    or
    you can build a customised program to access the PA, PB and data view and input the selection to zero out the stat fcst KF for that particular selection after the stat fcst run. Here you would need to check whether the disaggregated values are good enough.
    Hope it helps.

  • Item already defined at a high level in the product tree

    Hi experts,
    when I add a BOM into the system, say A consists of B and C, and I click the button to add this BOM, the system gives me an error: "Item already defined at a high level in the product tree.  Row no. 2".
    Row no. 2 is item C. I checked all my BOMs: C is not contained in any BOM, and B is in another BOM, but that BOM contains neither C nor A.
    A is in a BOM, but in that BOM there is no C. It's weird that C is supposedly already defined at a higher level in the product tree. Thanks...

    Hi
    Please run the following query and check:
    -- OITT holds the BOM headers, ITT1 the component lines (Father = parent item code)
    SELECT T0.[Code], T1.[Code] FROM OITT T0 INNER JOIN ITT1 T1 ON T0.Code = T1.Father WHERE T0.[Code] = [%0]
    Ashish Gupte

  • Errors in the high-level relational engine. The data source view does not contain a definition for the table or view. The Source property may not have been set.

    Hi All,
    I have a cube in which I'm using the TIME DIM that I created in the warehouse. But now I wanted a new measure in the cube, Average over time, and when I tried to create the new measure I got a message that no time dimension was defined, so I created a new time dimension in SSAS using the wizard. But when I tried to process the new time dimension I got the following error message:
    "Errors in the high-level relational engine. The data source view does not contain a definition for "SSASTIMEDIM" the table or view. The Source property may not have been set."
    Can anyone please tell me why I cannot create a new measure, average over time, using my time dimension? Also, what am I doing wrong with SSASTIMEDIM that I'm getting this error?
    Thanks

    Hi PMunshi,
    According to your description, you get the above error when processing the time dimension. Right?
    In this scenario, since you have updated the DSV, there should be no problem with the table's existence. One possibility is that the table has been specified for tracking in the notifications for proactive caching, but isn't available any more for some reason. Please change the Proactive Caching setting to "MOLAP".
    Reference:
    How To Implement Proactive Caching in SQL Server Analysis Services SSAS
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Change Higher Level Item of the Sub-Item in Repair Order

    Hello
    We have an in-house repair order created from a Complaint.
    This repair order has a main line item of type 'Repair Request' with item no. 1000.
    From this line, we are creating new main lines, 'Return for Repair' 1001
    and 'Diagnosis' 1002, through actions (appearing as buttons on the Items assignment block).
    Both these lines have Higher Level Item 1000.
    From the Diagnosis line 1002, we are generating a new Debit Memo line 1003 (with higher-level item 1002) by selecting a new button. This is billable and is to be charged to the customer in ECC.
    How do I change the Parent/Higher Level Item value from 1002 to 1001 (Return for Repair) for charging the customer?
    I found the action used for generating the debit line in CRM.
    Please advise how to achieve this functionality. Is it possible?
    Cheers
    RJ
    Note: I am a new tech guy in the Service area.
    Edited by: Remo J on Jul 30, 2011 11:00 PM

    Hi Remo,
    Did you implement the in-house repairs completely?
    Did you also work on ECC billing and controlling integrations?
    Would really appreciate your response on this.
    Regards,
    Itisha

  • Changing OSX default volume setting to a higher level in sound preferences

    When I change the default volume setting to a higher level in Sound preferences within OSX, it is back at the lower level the next time I use iTunes on my Mac Mini. Within System Preferences there is a section that lets you "choose a device for sound output", but the only choice I have is "line-out", and under the "port" heading there is only "built-in output"; no other choices are shown. I have external speakers attached which I like to use for listening to music as I work. What is the problem? Why won't the higher volume I set remain? Also, shouldn't more than one choice be shown for sound output?

    This sounds like the problem that I'm experiencing and asked about a couple of days ago (http://discussions.apple.com/message.jspa?messageID=2694517#2694517). The sound level resets to 50% after shutdown.
    I tried setting the levels in Audio MIDI Setup as suggested above, but that hasn't changed anything.
    In answer to B_web, I am running my external speakers through the headphone jack.
    So, any other suggestions?

  • QuickTime MPEG-2 Component can't play back High Profile High Level content

    Is the QuickTime MPEG-2 Playback Component really unable to play back HP HL content?
    I'm encoding high-bitrate media content for media servers with my Mac Pro and have used VLC so far to play back the encoded files.
    VLC, however, doesn't loop flawlessly, so I decided to try the QuickTime MPEG-2 Playback Component.
    To my disappointment it did not play back the High Profile High Level MPEG-2 content at all.
    It tried, but no image was visible and only a few digital bleeps could be heard from the soundtrack.
    Why call it a playback component if it can't do the job?
    Yours,
    Matti Snellman


  • High Level Thread Implementation Questions

    Hi,
    Before I take the plunge and program my software using threads, I have a few high-level questions.
    I plan on having a simulation class that instantiates software agents, each with different parameters. There is an agent class with a constructor, methods, etc. Each agent has a sequence to go through. Once completed, the iteration number is increased and the sequence is repeated. That's simple enough to do.
    The question is, is it worth executing each agent on a different thread?
    If there are around 500-1000 lines of code per agent (crude measurement, I know), how many agents can I expect to thread efficiently?
    One parameter allows an agent to execute n cycles for each global iteration. (i.e. in one iteration, agent A runs once, agent B runs 5 times). Could this be a problem? Should this be controlled outside the agent, or inside it?
    Can I write the code without having to worry about threading, or do I have to design the agent code with threading in mind?
    Will they really run in parallel? It is important that there is no bias in the execution order. I can solve this messily without using threads by randomising the execution order - but that is a messy workaround, and it is why I'm looking at threads.
    Can threaded objects interact easily with non-threaded ones when execution order is important?
    Are there any other points that I should consider?
    Thanks in advance - any information before I enter this uncharted territory will be truly appreciated!!

    I think you are better off running this all in a single thread.
    Threads make no guarantee as to scheduling. Threads do not increase efficiency (unless your agents block on i/o, or sleep). Threads come with an overhead cost.
    Threads don't guarantee no bias to execution order.
    Threads require synchronization to ensure safe interaction between each other. This is a bit of extra work, and can be a bitch if you're not familiar with it.
    Yes, threads run in parallel. If you have multiple processors then they can truly run in parallel, otherwise they run in time slices.
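    If it helps, here is a minimal single-threaded sketch of the shuffled-order approach mentioned in the question; Agent, runSequence() and cyclesPerIteration are made-up names standing in for the poster's classes:
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class Simulation {
        // Stand-in for the poster's agent class.
        static class Agent {
            final String name;
            final int cyclesPerIteration; // e.g. A runs once, B runs 5 times
            Agent(String name, int cyclesPerIteration) {
                this.name = name;
                this.cyclesPerIteration = cyclesPerIteration;
            }
            void runSequence() {
                System.out.println(name + " runs its sequence");
            }
        }

        public static void main(String[] args) {
            List<Agent> agents = new ArrayList<>();
            agents.add(new Agent("A", 1));
            agents.add(new Agent("B", 5));
            for (int iteration = 1; iteration <= 3; iteration++) {
                Collections.shuffle(agents); // removes ordering bias with no threads at all
                for (Agent agent : agents) {
                    for (int cycle = 0; cycle < agent.cyclesPerIteration; cycle++) {
                        agent.runSequence();
                    }
                }
            }
        }
    }
    As noted above, threads would only pay off here if the agents block on I/O or you have multiple processors, and even then any shared state would need synchronization.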

  • High-level interrupt handler

    Why do I get to decide whether to support a high-level interrupt or not? Under what conditions will the Solaris kernel map my hw interrupt (INTA from the PCI bus) to a high-level interrupt? When should I refuse to support a high-level interrupt, and why? Can I force my hw interrupt to be a high-level interrupt?
    Also, consider that most hw interrupts indicate something important, such as buffers being full. If they are assigned a priority below the scheduler's, it really does not make sense.
    Is it possible to block hw interrupts? Or, to put it another way: can I prioritize hw interrupts in Solaris?
    Thanks
    tyh

    Hi,
    On x86, each IRQ has a software priority assigned to it implicitly by the bus driver, although I think you could override it in driver.conf. Unlike SPARC, the processor doesn't support a PIL, so software priorities are implemented by masking all lower-priority IRQs and re-enabling interrupts.
    High-priority interrupts, above dispatcher level, run in the context of the current thread on the CPU; normal-level interrupts are handled by interrupt threads.
    The interrupt threads are the highest priority threads on the system, so will preempt any other running threads. In addition mutexes in Solaris use priority inheritance, so the interrupt threads will get to run.
    In general, high level interrupts are allocated to devices with small buffers such as serial or floppy, so that their buffers get serviced in the fastest possible time. Others can afford to wait for just a bit.
    Your driver should check to see if its device has been allocated a high level interrupt. If this is the case, the high level handler should clear the interrupt and save the data/status (in the driver state structure perhaps) and trigger your soft level interrupt handler (which will run as a thread).
    Blocking of interrupts is done for you when you acquire a spin mutex (ie initialised with an iblock cookie). Such a mutex is required to synchronise access to data shared with a high level handler in your driver.
    Please take a look at the Intel Driver writers orientation at:
    http://soldc.sun.com/developer/support/driver/docs/Solaris_driver_models/index.html
    Hope that helps,
    Ralph
    SUN DTS

  • High-level view of steps for 10g OWB-OLAP to Discoverer

    I would greatly appreciate ANY feedback on the following steps. These are not necessarily correct, nor the best way to do this. I am attempting to take source data, use OWB to create the analytical workspace, and from there have the metadata available for use by Discoverer.
    This is rather high-level; feel free to jump in anywhere.
    We are trying to see if we can get away with NOT using the Analytic Workspace Manager (AWM). With that in mind, we are trying to make the most of the process with OWB & OLAP.
    Is this possible to do without ever using the AWM? Can we go end to end (source data--->discoverer final reporting) primarily using OWB to get to the point where we can use the metadata for Discoverer?
    Can anyone relate experiences perhaps that would make me want to consider using the AWM at certain points instead?
    Most importantly, if I do use this methodology, would I be safe after everything has been set up? Would I want to consider using AWM at a later point for performance reasons while I am using Discoverer? Or would OWB be helpful as well in some aspects of data maintenance? Any clue how often I might need to rebuild, and if so, what to use in that case to minimize time?
    Thanks so much for any insight or opinion on anything I have mentioned!

    Hi Gregory,
    I guess the answer is that it depends. My first question is whether you are looking at a relational OLAP or multidimensional OLAP solution. This may change the discussion slightly, but let's look at some thoughts:
    In essence you can use the OWB bridge to generate the AW objects (cubes etc.). If you do that (for either ROLAP or MOLAP) you will get the AW objects enabled for querying, using any OLAPI query tool, like BI Beans or the new Discoverer for OLAP. The current OWB release does not run the Discoverer enabler (creating views specifically written for EUL support in Disco classic).
    So if you are looking at Disco classic you must use the AWM route...
    The other thing that you must be aware of is that the OWB technology is limited to cloning the relational objects for now. This means that you will create a new model based on your existing data. If you want to tweak the generated objects you will probably need to go to the underlying code in either scenario.
    So if you want to create calculated measures for example you could generate a cube with OWB, create a "dummy measure" and add the formula in OLAP DML. The same will go for some other objects you may want to create such as text measures...
    The benefit of creating placeholder or dummy measures is that the metadata is completely in order; you simply change the measure's behavior...
    For the future (the beta starts relatively soon) OWB will support much more modeling, like logical cubes, and you can then deploy directly to OLAP. Also, the mappings are transparent to the storage: you map to a logical cube and OWB will implement the correct logic to load either OLAP or relational targets.
    We will also start supporting calculated measures, sparsity definitions, partitioning and compression on cubes, as well as parallel building of cubes.
    Hope this gives you some insight!
    Jean-Pierre
