Best practices for GeoRaster and MapViewer?

Hi,
I want to display raster files using Oracle GeoRaster and MapViewer. I have binary raster files and aerial photographs (24-bit).
So far I have loaded the data into the database with the following parameters:
- Oracle_interleaving_type: BSQ
- Oracle_raster_tile_size_x and _y: 2048
- Oracle_compression: DEFLATE
- Oracle_pyramid_max: null
- Oracle_raster_pyramid_resampling: NN for binary data and CUBIC for aerial photographs
The binary raster files are about 15000x15000 pixels and the aerial photographs about 4000x4000 pixels.
For the MapViewer configuration of a GeoRaster theme I use pyramid level NULL for aerial photographs and 1 for binary images.
The MapViewer base maps use a tile size of 256x256 pixels and PNG as the format.
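For reference, here is a minimal JDBC sketch of how the pyramids can be (re)generated after loading, with the resampling method chosen per data type (NN for the binary rasters, CUBIC for the aerial photographs). The table RASTER_TAB and its GEORID/GEORASTER columns are placeholder names, not my actual schema:

    // Sketch: regenerate GeoRaster pyramids for one image via SDO_GEOR.generatePyramid.
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RegeneratePyramid {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger")) {
                conn.setAutoCommit(false);
                String plsql =
                    "DECLARE gr SDO_GEORASTER; " +
                    "BEGIN " +
                    "  SELECT georaster INTO gr FROM raster_tab WHERE georid = ? FOR UPDATE; " +
                    // NN for binary rasters, CUBIC for aerial photographs
                    "  SDO_GEOR.generatePyramid(gr, 'resampling=CUBIC'); " +
                    "  UPDATE raster_tab SET georaster = gr WHERE georid = ?; " +
                    "END;";
                try (CallableStatement cs = conn.prepareCall(plsql)) {
                    cs.setInt(1, 1); // placeholder GEORID
                    cs.setInt(2, 1);
                    cs.execute();
                }
                conn.commit();
            }
        }
    }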
Does anybody have experience with getting the best performance and quality when displaying raster files?
Regards,
Sebastian

Hi Jeffrey,
I also have the problem that MapViewer (Ver1033p5_B081010) doesn't render map tiles for all zoom levels with the settings I posted. Before MapViewer P5 existed, I rendered the map tiles with MapViewer P3.
With the latest version of MapViewer it is only possible to render map tiles up to zoom level 3; for levels two, one, and zero it doesn't render the tiles. MapViewer shows the following error:
WARNING: Failed to fetch tile image.
Message:Failed to fetch tile image.
Mon Feb 17 19:39:19 CET 2009
Severity: 0
Description:
at oracle.lbs.mapcache.cache.TileFetcher.fetchTile(TileFetcher.java:209)
at oracle.lbs.mapcache.cache.Tile.fetchTile(Tile.java:357)
at oracle.lbs.mapcache.cache.Tile.fetchTile(Tile.java:338)
at oracle.lbs.mapcache.cache.MapCache.getTile(MapCache.java:217)
at oracle.lbs.mapcache.MapCacheServer.getMapImageStream(MapCacheServer.java:161)
at oracle.lbs.mapcache.MCSServlet.doPost(MCSServlet.java:254)
at oracle.lbs.mapcache.MCSServlet.doGet(MCSServlet.java:209)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:711)
at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:216)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:117)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:110)
at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
17.02.2009 19:39:19 oracle.lbs.mapcache.cache.Tile getOffLineTileImagePath
FINER: Waiting for blank tile.
17.02.2009 19:39:20 oracle.lbs.mapcache.MCSServlet doPost
SEVERE: LUT has improper length!
Do you know why the MapViewer shows this message?
When I used MapViewer P3 I didn't have any problems generating map tiles.
Regards,
Sebastian

Similar Messages

  • SAP SCM and SAP APO: Best practices, tips and recommendations

    Hi,
    I have been gathering useful information about SAP SCM and SAP APO (e.g., advanced supply chain planning, master data and transaction data for advanced planning, demand planning, cross-plant planning, production planning and detailed scheduling, deployment, global available-to-promise (global ATP), CIF (core interface), SAP APO DP planning tools (macros, statistical forecasting, lifecycle planning, data realignment, data upload into the planning area, mass processing – background jobs, process chains, aggregation and disaggregation), and PP/DS heuristics for production planning).
    I am especially interested in best practices, tips and recommendations for using and developing SAP SCM and SAP APO. For example, [CIF Tips and Tricks Version 3.1.1|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700006480652001E] and [CIF Tips and Tricks Version 4.0|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000596412005E] contain pretty useful knowledge about CIF.
    If you know any useful best practices, tips and recommendations for using and developing SAP SCM and SAP APO, I would appreciate it if you could share them with me.
    Thanks in advance for your help.
    Regards,
    Jarmo Tuominen

    Hi Jarmo,
    Apart from what DB has suggested, you should give the following a good read.
    - Consulting Notes (use the application component filters in Search Notes)
    - Collective Notes (similar to the one above)
    - Release Notes
    - Release Restrictions
    - If $$ permit, subscribe to www.scmexpertonline.com. Good perspective on concepts around SAP SCM.
    - There are a couple of blogs (e.g. www.apolemia.com), but all lack breadth; some cover a few topics in depth.
    - The "Articles" section on this site (not all are classified well; look under ECC Ops, Mfg, SCM, Logistics, etc.)
    - Service.sap.com: check the solution details overview in the knowledge exchange tab. There are product presentations and collaterals for every release. Good breadth but no depth.
    - Building Blocks: available for all application areas. These are limited to a vanilla configuration that just makes a process work and nothing more.
    - Get the book "Sales and Operations Planning with SAP APO" by SAP Press. It's got plenty of easy-to-follow material, good perspective, and lots of screenshots to make life easier.
    - help.sap.com: the last thing most people refer to after all the "handy" options (incl. this forum) are exhausted. Nevertheless, it is the superset of all "secondary" documents. But the maze of hyperlinks that starts at APO might lead you to something like an XML schema.
    Key tip: appreciate that SAP SCM is largely driven by the connected execution systems (SAP ECC/ERP). So the best place to start is a good overview of the ERP OPS solution, at a significant level of depth. Check the document "ERP ops architecture overview" on the SDN wiki.
    I have a good collection of documents, though many I haven't read myself. If you need them, let me know.
    Regards,
    Loknath

  • Best Practices Methodologies and Implementation training courses

    Does anybody know regarding Best Practices Methodologies and Implementation training courses available?
    Kind regards,
    Adi
    [email protected]

    Hi Adi,
    Please go through these PDFs:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/519d369b-0401-0010-0186-ff7a2b5d2bc0
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e5b7bb90-0201-0010-ee89-fc008080b21e
    Hope this helps you. Please don't forget to give points.
    With regards,
    Vinoth

  • Best Practices Methodologies and Implementation

    Does anybody know regarding Best Practices Methodologies and Implementation training courses available?

    Hi dear,
    please don't post the same question several times...
    Look at Best Practice Course
    Bye,
    Roberto

  • Search for ABAP Web Dynpro best practices and/or evaluation grid

    Hi Gurus,
    Managers and team leaders are faced with the development of SAP applications on the web, and functional people propose web applications to the business. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing complaints about Web Dynpro response time: the business wants a 3-second response time and we have 20 or 25 seconds.
    I want to give the functional people a kind of recommendation document explaining that in certain cases the usage of Web Dynpro will not be a benefit for the business.
    I know that the transfer of data, the complexity of the screen, and the hardware are key factors, but I would like some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25 s is a lot; I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the [first part|http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/]. There is also a dedicated Web Dynpro ABAP forum for WDA here on SDN; I would search there for some tips and tricks.
    Cheers

  • Exchange Best Practices Analyzer and Event 10009 - DCOM

    We have two Exchange 2010 SP3 RU7 servers on Windows 2008 R2
    In general, they seem to function correctly.
    ExBPA (Best Practices Analyzer) results are fine. Just some entries about drivers being more than two years old (vendor has not supplied newer drivers so we use what we have). Anything else has been verified to be something that can "safely be ignored".
    Test-ServiceHealth, Test-ReplicationHealth and other tests indicate no problems.
    However, when I run the ExBPA, it seems like the server on which I run ExBPA attempts to contact the other using DCOM and this fails.
    Some notes:
    1. Windows Firewall is disabled on both.
    2. Pings in both directions are successful.
    3. DTCPing would not even run so I was not able to test with this.
    4. Connectivity works perfectly otherwise. I can see/manage either server from the other using the EMC or EMS. DAG works fine as far as I can see.
    What's the error message?
    Event 10009, DistributedCOM
    "DCOM was unable to communiate with the computer --- opposite Exchange server of the pair of Exchange servers---  using any of the configured protocols."
    This is in the System Log.
    This happens on both servers and only when I run the ExBPA.
    I understand that ExBPA uses DCOM but cannot see what would be blocking communications.
    I can access the opposite server in MS Management Consoles (MMC).
    Note: the error is NOT in the ExBPA results - but rather in the Event Viewer System Log.
    Yes, it is consistent. Have noticed it for some time now.
    Does anyone have any idea what could be causing this? Since normal Exchange operations are not affected, I'm tempted to ignore it, but I have to do my "due diligence" and inquire. 
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you.

    Hi David,
    I recommend you refer to the following article to troubleshoot this event:
    How to troubleshoot DCOM 10009 error logged in system event
    Why this happens:
    Generally speaking, the reason DCOM 10009 is logged is that the local RPCSS service can’t reach the remote RPCSS service of the remote target server. There are many possibilities which can cause this issue.
    Scenario 1:
     The remote target server happens to be offline for a short time, for example, just for maintenance.
    Scenario 2:
    Both servers are online; however, an RPC communication issue exists between these two servers, for example: server name resolution failure, exhaustion of port resources for RPC communication, or firewall configuration.
    Scenario 3:
    Even if the TCP connection to the remote server has no problem, if the RPC authentication runs into a problem we may get an error status code like 0x80070721, which means “A security package specific error occurred” during RPC authentication, and DCOM 10009 will also be logged on the client side.
    Scenario 4:
    The target DCOM/COM+ service failed to be activated due to a permission issue. In this situation, DCOM 10027 will be logged on the server side at the same time.
    Event ID 10009 — COM Remote Service Availability
    Resolve
    Ensure that the remote computer is available
    There is a problem accessing the COM Service on a remote computer. To resolve this problem:
    Ensure that the remote computer is online.
    This problem may be the result of a firewall blocking the connection. For security, COM+ network access is not enabled by default. Check the system to determine whether the firewall is blocking the remote connection.
    Other reasons for the problem might be found in the Extended Remote Procedure Call (RPC) Error information that is available in Event Viewer.
    To perform these procedures, you must have membership in Administrators, or you must have been delegated the appropriate authority.
    Ensure that the remote computer is online
    To verify that the remote computer is online and the computers are communicating over the network:
    Open an elevated Command Prompt window. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    At the command prompt, type ping, followed by a space and the remote computer name, and then press ENTER. For example, to check that your server can communicate over the network with a computer named ContosoWS2008, type ping ContosoWS2008, and then press ENTER.
    A successful connection results in a set of replies from the other computer and a set of ping statistics.
    Check the firewall settings and enable the firewall exception rule
    To check the firewall settings and enable the firewall exception rule:
    Click Start, and then click Run.
    Type wf.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    In the console tree, click Inbound rules.
    In the list of firewall exception rules, look for COM+ Network Access (DCOM In).
    If the firewall exception rule is not enabled, in the details pane click Enable rule, and then scroll horizontally to confirm that the protocol is TCP and the LocalPort is 135. Close Windows Firewall with Advanced Security.
    Review available Extended RPC Error information for this event in Event Viewer
    To review available Extended RPC Error information for this event in Event Viewer:
    Click Start, and then click Run.
    Type comexp.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    Under Console Root, expand Event Viewer (Local).
    In the details pane, look for your event in the Summary of Administrative Events, and then double-click the event to open it.
    The Extended RPC Error information that is available for this event is located on the Details tab. Expand the available items on the Details tab to review all available information.
    For more information about Extended RPC Error information and how to interpret it, see Obtaining Extended RPC Error Information (http://go.microsoft.com/fwlink/?LinkId=105593).
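    As a quick supplementary check (a minimal sketch, not part of the guidance above), you can also verify from Java that the RPC endpoint mapper port (TCP 135) on the opposite server is reachable at all. This only tests basic TCP connectivity, not DCOM/RPC authentication, and the host name below is a placeholder:

        // Sketch: test whether TCP 135 (RPC endpoint mapper) is reachable on the remote server.
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class RpcPortCheck {
            public static void main(String[] args) throws Exception {
                String host = args.length > 0 ? args[0] : "exch02.contoso.local"; // placeholder
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, 135), 3000); // 3 s timeout
                    System.out.println("TCP 135 reachable on " + host);
                } catch (Exception e) {
                    System.out.println("TCP 135 NOT reachable on " + host + ": " + e);
                }
            }
        }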
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Best Practice: DAG and AutoDatabaseMountDial Setting

    Hi,
    I am working with management in my organization with regard to our DAG failover settings. By default, the failover setting is set to 'Good Availability' (missing 6 logs or less per DB). My organization did not feel comfortable with data loss, so we changed it to 'Lossless'.
    Of course, we then had a SAN failure, we lost a DAG member, and nothing failed over to the surviving DAG member. Even though EventID 2092 reported the same log sequence for many databases, the surviving DAG member did not mount. Example:
    Database xxxmdb28\EX-SRV1 won't be mounted because the number of lost logs was greater than the amount specified by the AutoDatabaseMountDial.
    * The log file generated before the switchover or failover was: 311894
    * The log file successfully replicated to this server was: 311894
    Only after the SAN was restored and the failed DAG member came back did another EventID 2092 get logged stating the log sequence was not 311895 (duh! - because the database got mounted again.) We opened a support case with Microsoft and they suggested no databases mounted because the surviving DAG member could not communicate with the non-surviving member (which is crazy to me because isn't that THE POINT of the DAG??). Maybe there is always a log file in memory (?) so AutoDatabaseMountDial set to Lossless will never automatically mount any database? Who knows.
    In any case, we are trying to talk ourselves back into setting it back to the 'Good Availability' setting. Here is where we are at now:
         2-member DAG hosting 3000 mailboxes on about 36 databases (~18 active on each node) in the same AD site (different physical buildings) with a witness server
         AutoDatabaseMountDial set to Lossless
         Transport dumpster set to 1.5x the maximum message size
    What level of confidence can we expect that we will not have data loss with a properly configured transport dumpster and a setting of 'Good Availability'? I am also open to other suggestions, such as changing our active-active DAG to active-passive by keeping all active copies on one server.
    Also, has anyone experienced any data loss with 'Good Availability' and a properly configured transport dumpster?
    Thanks for the guidance.

    Personally, I have not experienced loss in this scenario and have not changed this setting from "Good Availability". I know that setting the transport dumpster to 1.5x is the recommended setting. Also, there is a shadow queue for each transport server in your environment, which verifies the message reaches the mailbox before clearing the message.
    To give an example for mail flow (assuming these are multi-role servers; it still applies to split roles): you have Server1 and Server2, with an external user sending mail to a user on Server1. The message will pass through all your external hops, etc., and then get to the transport servers. If it is delivered to Server1, the message has to be sent to Server2 and then back to Server1 to be delivered to the mailbox, so that it hits the shadow queue of Server2. If the message hit Server2 first, then it would be sent to Server1 and then to the mailbox.
    If either of your servers is down for a period of time, the shadow queue will try to resend the messages again, and that is why you wouldn't have any data loss.
    Jason Apt, Microsoft Certified Master | Exchange 2010
    My Blog

  • Best practice DataGroup and custom layout

    Hello,
    Some time ago, I made a little TimeLine component in Flex 3. The component was very limited in terms of possibilities, mainly because of performance issues.
    With Flex 4 and virtualization in layouts, I thought it would be easier to create my component.
    But I'm confronted with some problems.
    I am trying to extend DataGroup with a fixed custom layout, but I have 2 problems:
    - The first one is that the size and position of elements will depend on the dataProvider of the component, so my layout needs a direct dependency on the component.
    - And the main one is that the depth of an element will depend on the data, and I don't see a way of updating it in my layout.
    Should I stop using DataGroup and implement it directly from UIComponent with an IViewport implementation and a sort of manual virtualization?
    Should I override the dataProvider setter to sort it the way element depths should be set?
    Should I use BasicLayout and access properties from the DataGroup directly in the itemRenderer to set top, left, width, and height?
    I'm a little lost and any advice would be a great help.
    Thanks.

    user8410696,
    Sounds like you need a simple resource loading histogram with a threshold limit displayed for each project. I don't know if you are layout savvy, but layouts can be customized to fit your exact needs. Sometimes P6 cannot display exactly what you desire, but most basic layouts can be tweaked to fit. I believe that you need a combination of resource profiles and columns.
    I'll take some time today and do some tweaking on my end to see if I can modify one of my resource layouts to fit your needs. I'll let you know what I find out.
    talk to you later,

  • Grid monitoring best practices: dos and don'ts

    Hi
    I would like to know your thoughts and experience (dos and don'ts), especially on the following:
    * monitoring target hosts / Linux
    * using default templates
    What are the pitfalls?
    Another question about default templates: I would like to set a default template for the target type DB, but a DB could be a standby/RAC/stand-alone (I have different types of templates for each of those). How can I set a default template based on DB type? Can I create one template that has all metrics regardless of whether the DB is RAC/primary/standby and apply it to all by default? Is this how it should be done?
    grid v11g
    thanks

    Hi
    > But Grasshopper wants to know what is the minimum amount of statistics that correctly describes the data. I realize that this "depends", but that is not an answer.
    If you compare the data with the statistics, you know whether you have good statistics or not. Once you have a good set, you have to reduce the gathering load to the minimum while keeping the good statistics.
    > the Support Analyst was conceding that Oracle "out of the box" does not do a very adequate job of gathering the "minimum amount of statistics that correctly describes the data"
    I don't agree. In fact, "out of the box" Oracle gathers:
    - too many statistics (too many histograms)
    - it's too "aggressive" with the estimate percent
    You end up with a gathering that is too slow and that provides too much information to the CBO (slowing it down as well). Usually, if you have a problem with the automatically installed job, it is because you have a large database and gathering statistics impacts your business.
    > But if we can come up with a policy or method whereby we can say with certainty that the extra 5-10% is definitely not an issue with statistics, or even decrease those errant queries to <1%, I would consider our tuning efforts a success.
    My motto is "Start with a simple solution! If it doesn't work, elaborate on it.", i.e. it's simply overkill to implement a solution that handles 100% of the cases. In other words, I advise using the default gathering, possibly with a few parameter changes, in most cases. Only when there is a problem should you switch to manual gathering (and then, possibly, only for a few tables or schemas).
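    As a minimal sketch of the kind of "few parameter changes" meant here (illustrative only; run as a suitably privileged user, and treat the values as examples rather than recommendations), on 11g the global gathering preferences can be adjusted, e.g. to stop collecting histograms by default:

        // Sketch: adjust the default statistics-gathering preferences via JDBC.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class AdjustStatsPrefs {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "manager");
                     Statement st = conn.createStatement()) {
                    // Collect basic column statistics without histograms by default.
                    st.execute("BEGIN DBMS_STATS.SET_GLOBAL_PREFS("
                             + "'METHOD_OPT', 'FOR ALL COLUMNS SIZE 1'); END;");
                }
            }
        }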
    There is a lot to understand in the CBO. I agree.
    Regards,
    Chris

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs. number of cache instances and have ended up with the following configuration: 7 cache instances per node, running with a 2G heap and a high-units value of 1.5G. Our testing has shown that with the Concurrent Mark Sweep GC algorithm there are no substantial GC pauses, and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java), which also shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context-switching overhead, as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern here is that we’re straying from the best-practices recommendations, and I’m wondering what others' thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon


  • BI BPC server strategy and the best practice

    Hi  Everyone
    I would like to know couple of questions:
    1) Is there any white paper or documentation on the pros and cons of having BPC NW installed as an add-on to a BW system, where planning and reporting take place on the same BW system, versus BPC as a separate system used primarily for planning and consolidation only?
    2) Is there a best-practice document and performance considerations for BPC development from SAP?
    Any answers are appreciated.
    Regards
    AK

    Hi AK,
    Both scenarios work well, but for the first scenario, having BPC on top of existing BW reporting, you need to pay special attention to sizing. As BPC requires additional capacity, you need to take care of it.
    Also, if you have the SEM component on your BW system, you need to check SAP note 1326576 – SAP NW system with SAP ERP software components.
    And before you install BPC, it is recommended to run a quick test once you have upgraded to NW EhP1 (it is a prerequisite for BPC) to check the existing BW reporting process.
    regards,
    Sreedhar

  • EFashion sample Universes and best practices?

    Hi experts,
    Do you all think that the eFashion sample Universe was developed based on the best practices of Universe design? Below is one of my questions/problems:
    A Universe is designed to hide technical details and answer all valid business questions (queries/reports). For nonsense questions, it will show 'incompatible' etc. In the eFashion sample, I tried to compose a query to answer "for a period of time, e.g. from 2008.5 to 2008.9, in each week for each product (article), its MSRP (sales price), sold price, margin, quantity sold and promotion flag". I grabbed Product.SKUnumber, week from Time period, Unit Price MSRP from Product, Sold at (unit price) from Product, Promotions.promotion, and Margin and Quantity sold from Measures into the Query Panel. It gives me an 'incompatible' error message when I try to run it. I think the whole sample (from the database data model to the Universe schema structure/joins...) is flawed.
    In the Product_promotion_facts table, it seems that if a promotion lasts for more than one week, the weekid will be the starting week and the duration will indicate how long it lasts. In this design, answering "what promotions run in what weeks" will not be easy, because you need to join Product_promotion_facts with the Time dimension using "time.weekid between p_prom.weekid and p_prom.weekid+duration" (assuming weekid is in sequence), instead of a simple "time.weekid=p_prom.weekid". The weekid joins between Shop_facts and Product_promotion_facts and Calendar_year_lookup are very confusing, because one is about "the week the sale happened" and the other "the week the promotion started". No tool can be smart enough to resolve this ambiguity automatically.
    Then there is the shortcut join between Shop_facts and Product_promotion_facts: it's based on the article id alone. Obviously the two have to be joined on both article and time (using between/and, not the simple weekid=weekid in this design), otherwise the join doesn't make sense (a sale of one article on one day joins to all the promotions for this article across all time?).
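    To make the time condition concrete, here is a tiny sketch (illustrative only, with made-up week ids) of the window test that the BETWEEN join has to express, assuming weekid values are sequential:

        // Sketch: a sale falls inside a promotion window when
        // promoStartWeekId <= saleWeekId <= promoStartWeekId + duration,
        // i.e. the BETWEEN join described above, not a simple weekid = weekid.
        public class PromotionWindow {
            static boolean saleInPromotion(int saleWeekId, int promoStartWeekId, int durationWeeks) {
                return saleWeekId >= promoStartWeekId
                    && saleWeekId <= promoStartWeekId + durationWeeks;
            }

            public static void main(String[] args) {
                // Promotion starts in week 200835 and runs 4 weeks (example values).
                System.out.println(saleInPromotion(200837, 200835, 4)); // true
                System.out.println(saleInPromotion(200841, 200835, 4)); // false
            }
        }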
    What do you think?
    thanks.
    Edward

    You seem to have the idea that finding out whether a project uses "best practices" is the same as finding out whether a car is blue. Or perhaps you think there is a standards board somewhere which reviews projects for the use of "best practices".
    Well, it isn't like that. The most cynical viewpoint is that "best practices" is simply an advertising slogan used by IT consultants to make them appear competent to their prospective clients. But basically it's a value judgement. For example using Hibernate may be a good thing to do in many projects, but there are projects where it would not be a good thing to do. So you can't just say that using Hibernate is a "best practice".
    However it's always a good idea to keep your source code in a repository (CVS, Subversion, git, etc.) so I think most people would call that a "best practice". And you could talk about software development techniques, but "best practice" for a team of three is very different from "best practice" for a team of 250.
    So you aren't going to get a one-paragraph description of what features you should stick in your project to qualify as "best practices". And you aren't going to get a checklist off the web whereby you can rate yourself for "best practices" either. Or if you do, you'll find that the "best practice" involves buying something from the people who provided the checklist.

  • APO - Aggregate Forecasting and Best Practices

    I am interested in options to save time and improve accuracy of forecasts in APO Demand Planning.   Can anyone help me by recommending best practices, processes and design that they have seen work well?
    We currently forecast at the Product level (detailed).  We are considering changing that to the Product Family level.   If you have done this, please reply.

    Hello Dan -
    Forecasting at the product level is very detailed (but it depends on the number of SKUs available and to be forecasted).
    On my project we have a sample size of about 5000 finished goods to start with, and forecasting at that minute level won't help. I have defined a product-group level where I have linked all the similar FGs. That way, when you are working with the product group, you can be a little more assertive in your forecasting. After that you can use proportional factors that will help you allocate the necessary forecast down to the product level. This way you are happy and management is happy (high-level reports), as they don't have to go through all the data that is not even necessary for them to see.
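    As a minimal illustration of proportional-factor disaggregation (the factors and product names below are made-up example values, not from any real planning area):

        // Sketch: split a product-group forecast down to products using proportional factors.
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class ProportionalDisaggregation {
            public static void main(String[] args) {
                double groupForecast = 10000.0; // forecast at product-family level

                Map<String, Double> proportionalFactors = new LinkedHashMap<>();
                proportionalFactors.put("FG-001", 0.50);
                proportionalFactors.put("FG-002", 0.30);
                proportionalFactors.put("FG-003", 0.20);

                // Product-level forecast = group forecast x proportional factor.
                proportionalFactors.forEach((product, factor) ->
                    System.out.printf("%s -> %.0f%n", product, groupForecast * factor));
            }
        }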
    Hope this helps.
    Regards,
    Suresh Garg

  • Best Practice of DMS in Auto-Industry & benchmarks

    Hi Experts,
    What is the best practice in DMS for the auto industry?
    Also, we need a centralized server and cache servers at plant level. What would be the best practice? And what about archiving solutions?
    What is the benchmark for defining content and cache server sizes?
    Regards,
    Ravindra

    answered outside sdn

  • Need advice for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to using external transactions, so we can perform database operations and JMS operations within a single transaction.
    Some of our team have tried out the TopLink support for external transactions and come up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum and the experts to help comment on whether these recommendations are indeed valid and in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (highlighted in blue in the original post), the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. To use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon
