Modelling requirement

Hi friends,
1) I am in a BI-IP environment. I have created a copy planning function using FOX, and I want this function to be executed as a batch process once the user makes changes in BEx Analyzer. So I want to know how and when the planning function or planning sequence should be triggered. I want this to happen automatically, i.e. as soon as the planner makes and saves changes in the user interface, the copy function should run in the background and apply the changes.
2) Is there any way to copy only those values which have been changed in the user interface, rather than reading and copying all the records present in my aggregation level/filter? If yes, how should I go about writing the FOX code?
3) Can I read the values of filter variables in FOX code and loop through those values? If yes, how?
Note: I am in BI-IP, not in BW-BPS.

Q2) My question was: suppose I have 1000 records displayed in the report, and the user makes manual changes to only a few records, or in fact changes certain key figure values. When I update these changes, does the planning function overwrite the values of both the changed and unchanged records?
I was wondering whether there is any way to update only the changed records rather than all the records, as my client has 65 million records in the cube.
Q3) Can you please explain the concept of calling a function module which triggers a planning sequence using FOX/exit?

Similar Messages

  • How to decide the no. of models required for our application implementation

    Hi,
    I have one basic question in WD development:
    how do I decide the number of models (aRFC, EJB, etc.) required to implement the functionality of our application?
    For example, my application consists of the functionalities below:
    1. search for a country
    2. display the bank list for the above country
    3. display the bank details of the bank selected in step 2
    Can anyone explain how to decide the number of models required for the above functionalities,
    or can we include all of the above functionalities in a single model?
    Regards,
    Govindu

    Hi
    It all depends on the application and the requirements; it is better to consult the functional people or the team lead about what exactly they want to implement. All of it could be done with RFC or EJB; some alternatives are:
    1. Search for the country: for this there is a standard Java API which populates all countries (no need for any RFC or EJB).
    2. Display the bank list for the above country: check whether there is a web service for this, or create a value help for it manually (even if you use an RFC, the ABAP developers will do the same).
    3. Display the bank details of the selected bank: step 2 will solve this as well, with one additional field, i.e. address and details.
    I hope you agree with the point that a standalone WD app can do all these things without any model.
    Best Regards
    Satish Kumar

  • Reimport model requires server restart for version NW 04s SP15

    Hi All,
      Does a reimport of a model require a server restart in NW 04s SP15? Sometimes it works without a restart of the server. Please give your inputs.
    Thanks.

    Hi,
    The J2EE engine holds a cache of deployed model objects that is optimized by technical system.
    Therefore if there are updates in the metadata of a deployed model object, it is necessary to restart the J2EE engine to flush this cache, and make the new interface metadata available to the Web Dynpro application.
    It's not always required to restart the server; it's only required when there are updates in the model object.
    Thanks n Regards,
    Archana

  • Distribution model required for LS to KU ?

    Hi Guys,
    In BD64 we can create a distribution model ONLY with LS to LS.
    So in a scenario like LS to KU, the system does not allow us to create a distribution model because the recipient is KU (not LS).
    So how will the system (MASTER_IDOC_DISTRIBUTE) determine who the recipient is? Is it from the partner profile only?
    If yes, is a distribution model not required at all for the LS to KU scenario?
    Your comments please...
    Thanks & regards,
    Biswajit Ganguly.

    I found out by myself that no distribution model is required for KU.
    Cheers,
    Biswajit.

  • Custom TestStand model required

    Hi,
    I'm running a test that runs for a long time (around 30 minutes).  The setup is such that there are 4 packs of the same type, of which only one is the UUT.  In order to increase throughput, I would like to run a test that tests 4 packs at a time, but it doesn't fit the Batch or the Parallel model.
    The specifications are as follows:
    1) One UUT pair (2 packs) is inserted in one shelf; another pair is inserted in another shelf.
    2) The test is run in a "SequentialModel" fashion.
    3) The 2 pairs of UUTs change places.
    4) The test is repeated.
    5) All 4 serial numbers must be entered in the PreUUT callback.
    6) If everything passes, 4 reports will be generated and 4 entries will be logged into the database.
    7) If a step fails, only one UUT will be logged into the database.  My program will be able to specify which of the 4 UUTs is the 'bad' UUT.
    Does anybody have an idea where to start?
    Does anybody have a similar custom model?
    Thanks
    Rafi

    Hey Rafi,
    I think I'm a little confused here.  So when the first 2 packs are being tested, you run tests sequentially on them?  Then you change to the next 2 packs and run sequential tests on those as well?  If so, then I would just run sequentially and then, depending on pass/fail, you can deal with the reporting and writing to the database.  We really don't have any custom models available.  There is a tutorial on how to create them here.
    Regards,
    jigg
    CTA, CLA
    teststandhelp.com
    ~Will work for kudos and/or BBQ~

  • MDG Replication model required for MDGF with Co-Deployment?

    Hi MDGF experts,
    We are implementing MDGF 6.1 and I am not sure whether I have to set up a replication model if MDG should send to the same client (co-deployment), and whether we need to perform key mapping if we use the standard 0G data model.
    In another document I read that you still have to set up a replication model for MDGF if you post to the same client: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6043b83f-1a37-3110-cd8e-f2dabb672019?QuickLink=index&… but this document is for MDG 7.
    If I do indeed have to set up a replication model, do I use replication via IDoc, services, or RFC?
    Your help will be highly appreciated
    Thanks and best regards
    Riaan

    Hi Abdullah,
    Thanks for the document. The document describes settings for the HUB and the ECC client. To me it looks like you have to create 2 profiles for end-to-end communication, one for the hub and one for the target. Thus, if in my instance I want to replicate in the same client, do I still have to set up one profile as hub and a second profile for ECC on the same client, or do I use the same profile and add all the target and hub service definitions to that one profile, or only the target system settings?
    Also, what do I do about the integration scenario configuration in the MDG hub system and in the MDG target system? Or do I only have to do it for the target system, seeing that in my instance it is the same system?
    Thanks and best regards
    Riaan

  • Does creating a LabVIEW DLL from a Simulink model require Visual Studio

    I've asked the same question at the Simulink forums, but maybe someone here has the answer: 
    I'm trying to perform this process here:
    http://zone.ni.com/devzone/cda/tut/p/id/3447
    However, I get the following errors from the Simulink Real Time Workshop:
    Error building Real-Time Workshop target for block diagram 'SensorCAN_sfcn'. MATLAB error message:
    Error using ==> setup_for_visual>LocIssueMSDevError at 324
    Invalid setting for environment variable MSDevDir or DevEnvDir.
    The setting is: ''
    You can verify the setting by checking for the existence of:
      %DevEnvDir%\..\tools\vsvars32.bat          (for Visual C/C++ 7.1)
      %MSDevDir%\..\..\vc98\bin\vcvars32.bat     (for Visual C/C++ 6.0)
      %DevEnvDir%\..\tools\vsvars32.bat          (for Visual C/C++ 8.0)
    I do not use Visual C; what little programming I do has been done in Borland Builder or with command-line gcc.  Is there any way to make Simulink look for a different dev environment?  Or does this process require Visual C?

    Thanks, I've got SIT 5.0 installed, but it turns out we have a site license for VS 2008, so I now have that as well.  I've got a new problem though, and although I think it's related to Simulink, these forums seem a little more responsive than the Matlab forums. So...
    I'm working with someone else's s-code, and I haven't used Simulink in the past. As I said in my original post, I am attempting to use SIT to turn the s-code into a DLL for use in a LabVIEW RT application. I opened the mdl file in Simulink, and I'm now trying to build the LabVIEW DLL in the Simulink Real-Time Workshop.  The target file is nidll.tlc, and Matlab starts the SIT when launched, so it appears the tools are aware of each other.  However, when I attempt to build the DLL, I get the following error:
    fatal error C1083: Cannot open include file: 'rtlibsrc.h': No such file or directory
    I'm using the following: Matlab 2008b, SIT 5.0, and MS Visual Studio 2008, so the libraries and includes should be fairly up to date. What I don't get is that the files calling for this include are auto-generated by Simulink, so I don't know why it's not finding rtlibsrc.h. Still, I found it in the RTW subdirectory and copied it into my local directory. This gives me a different error:
    ### Linking ...
    C:\PROGRA~1\MATLAB\R2008b\sys\perl\win32\bin\perl C:\PROGRA~1\MATLAB\R2008b\rtw\c\tools\mkvc_lnk.pl SensorCAN_sfcn.lnk SensorCAN_sfcn.obj rtGetInf.obj rtGetNaN.obj rt_logging.obj rt_matrx.obj rt_nonfinite.obj rt_printf.obj rt_sfcn_helper.obj nidll_main.obj rt_sim.obj SensorCAN_sfcn.res SensorCAN0_sf.obj
    link /RELEASE /INCREMENTAL:NO /NOLOGO -entry:_DllMainCRTStartup@12 -dll /NODEFAULTLIB:MSVCRT LIBCMT.LIB kernel32.lib ws2_32.lib mswsock.lib advapi32.lib @SensorCAN_sfcn.lnk /dll -out:SensorCAN_sfcn.dll
       Creating library SensorCAN_sfcn.lib and object SensorCAN_sfcn.exp
    SensorCAN0_sf.obj : error LNK2019: unresolved external symbol _rt_Lookup referenced in function _mdlOutputs
    Again, self-generated code, now with unresolved external symbols? I'm assuming that I'm linking the wrong version of a DLL or obj that contains mdlOutputs, which appears to be something from Matlab. Can anyone point me in the right direction? Missing headers and incorrect libraries lead me to believe that I've got a search-path issue somewhere, but Matlab/Simulink/LabVIEW/Visual Studio are all fresh default installs.

  • Tried to locate info about Lenovo notebook model

    I would like to buy a Lenovo notebook. I found the Lenovo 3000 N200 in Egypt, but when searching for the exact model specification I could not get any information (on the Lenovo site or by searching the internet as a whole)!
    Can the dealer change any of the model's specifications?
    The specifications I got are as follows:
    LENOVO 3000 N200
     Intel® Core 2 Duo T5750 2.0 GHZ - 2MB L2 Cache Memory
    4.0 GB Ram Memory 5300 DDR2
    160 GB Hard Disk Drive 7200 rpm
    CD-RW / DVD-RW Multi Burner
    15.4" WXGA TFT 1200 x 800 Vibrant View Screen
    Intel Graphics Media Accelerator X3100 up to 256
    10/100 Ethernet-Integrated / Bluetooth / 56K V.92 Modem
    Integrated Intel Pro/Wireless 3945 a/b/g
    Built-in Audio & Speaker
    5-in-1 Multi-card Reader,4 USB 2.0
    Camera & FPR
    S-Video out / RJ-45 / RJ-11 / IEEE 1394.Headphone / Line out
    External Microphone / Line-In
    External Display (VGA)
    Vista home basic
    If I find T5750, I do not find 2.0 GHz;
    if I find 2.0 GHz, I do not find T5750;
    if I find 2.0 GHz with T5750, I find NVIDIA graphics, not Intel Graphics X3100.
    http://www5.pc.ibm.com/europe/products.nsf/products?openagent&brand=Lenovo3000Notebook&series=Lenovo...
    Please help me to find the model name or part number, so I can get sufficient information on the product before buying.
    Thank you

    In fact, T5750 means a Core 2 Duo at 2 GHz with 2MB cache and a 667 MHz bus. It is better to narrow the search with T5750 if that is the specific processor; if not, simply searching for 2 GHz will do the job.
    The only way is to ask the vendor, or another owner of the Lenovo model you want, to read the "TYPE" code. It starts with 0769-XXX, where XXX is the letters and/or numbers that specify your exact model.
    3000 N200 0769-AL3, Intel C2D 2G T5750, IEL10 mb, Kingston 4G RAM, Samsung 320G HD, Broadcom lan 100, Agere modem 56k, Broadcom wifi g, AuthenTec fingerprint reader, Sonix 1.3M camera, LG DVD burner, Samsung 15,4" 1280x800 BrightView, Intel X3100 IGD 384M, Ricoh 5in1, Ricoh 1394, vista home premium sp2

  • How to use a complex model with the Levenberg-Marquardt VI

    I am trying to use the Levenberg-Marquardt VI. My problem is that I have a model that predicts a complex-valued impedance spectrogram. The model requires as input the 3 parameters as well as a collected complex-valued data vector in order to produce a prediction for a complex-valued impedance spectrogram vector. I want to use the LM VI to curve-fit the 3 parameters against a separately collected experimental impedance spectrogram. Can this be done, or is this a limitation of LabVIEW 6.02 at present? I have looked all over the website and found no reference to using a complex-valued model in conjunction with either of the 2 LM VIs supplied.
    Thanks many,
    jim

    Jim,
    I looked into the Levenberg-Marquardt VI and it does not appear to support complex numbers. The X and Y arrays are doubles, not complex numbers; therefore you will not be able to use the VIs as they are.
    The code is all open source, so you could attempt to modify it to support complex numbers. Another idea I had was to break the complex number up into either real and imaginary parts, or r and theta, and feed those arrays into the algorithm; however, it only accepts 1-D arrays as inputs, so again you would have to modify the code to handle 2-D arrays as inputs.
    I am not sure if there is any way to do it, but if you could convert the complex data set into a single-dimension number and modify the model to handle the change in the data, you could use the VIs as they are. Like I said, I am not sure if this is a possible solution, but it is an idea.
    Happy Coding
    Evan
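    For what it's worth, the "stack real and imaginary parts" idea above is easy to prototype outside LabVIEW. Here is a minimal Python/scipy sketch of the same technique; the impedance model, parameter values, and names are invented for illustration and are not Jim's actual model:
      import numpy as np
      from scipy.optimize import least_squares

      def impedance_model(params, freq):
          # Hypothetical complex-valued model: series R + jwL + 1/(jwC).
          R, L, C = params
          w = 2 * np.pi * freq
          return R + 1j * w * L + 1.0 / (1j * w * C)

      def residuals(params, freq, measured):
          # The trick: flatten the complex residual vector into a real one
          # by concatenating its real and imaginary parts.
          diff = impedance_model(params, freq) - measured
          return np.concatenate([diff.real, diff.imag])

      freq = np.logspace(1, 5, 200)                          # measurement frequencies (Hz)
      measured = impedance_model([100.0, 1e-3, 1e-6], freq)  # pretend "experimental" data
      fit = least_squares(residuals, x0=[50.0, 5e-4, 5e-7], args=(freq, measured))
      print(fit.x)  # recovers approximately [100.0, 1e-3, 1e-6]
    This is the same transformation Evan describes; the stock LabVIEW VIs would still need the modifications he mentions before they could accept the stacked arrays.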

  • Which model of Mac Pro Tower can take 64GB RAM?

    Hi everyone,
    I've been searching and going around in circles to find the true answer to this, so I figured I'd just make this post.
    EXACTLY which models of Mac Pro Tower can take 64GB of RAM? Looking on websites like OWC and asking people, I'm getting conflicting information and getting confused.
    THANKS!

    1. Yes.
    2. Yes; Apple doesn’t update its specifications when higher capacity RAM becomes available. A few models require more recent firmware versions and/or OSes installed to reach the actual limit.
    (115360)

  • Alternative approach to tree business model

    Hi JHS Team,
    When generating tree-forms with JHeadstart, we need two VO instances in the model for each node we want to present: one for rendering the tree and another to show on the form after the user selects a node.
    I understand that this is necessary for the case when the user clicks on a node whose parent node isn't the current row of its corresponding View Object; in such a case the form couldn't find the current row and show the correct data.
    I'm still using the 10.1.2 version, but I checked the 10.1.3 documentation and this model requirement is still there.
    I tried another approach to this issue, which consists of using the very same VO instance for trees and forms: at every level, I render the node with every iterator used in the previous levels and the corresponding row key for each previous level. When the user selects a node, that node's link carries every iterator I need to seek and every primary key I need to search for, so I can reconstruct the path the user took to reach that node.
    So the benefit is that I don't need a root VO in the AM for every node I need to present as a form. In huge trees, this really simplifies the model construction and its maintenance.
    In 10.1.2, the runtime library (JhsDataAction) is already prepared for this, so I would like to know, if possible, the reasons why this is not used by default (maybe performance?). If there is a reason to keep the two VO instances in 10.1.3, could it be changed to allow us to use the same instances as an alternative to simplify models, and to generate the JSF pages to work accordingly?
    Thanks a lot!
    Eduardo

    Hi Steven,
    Many thanks for the explanation. I agree with you that the current implementation is easier to understand, but I think that is only from the point of view of a programmer who understands some concepts of the ADF business model. When you're a beginner, it's more difficult and less intuitive (or at least it was for me some time ago :-))
    I can say that in our project we have 4 very large trees in the same Application Module (they need to be in the same project since they're all related), and at each level we have at least 3 or 4 "details" VOs we present in the form, so the final model is really huge and really difficult to maintain. It was surprisingly easy to change the UIXs to the other approach, however, so I asked myself whether this had been tried already. Thanks for warning me about the unpredictable problems; I'll see if we run into any.
    Eduardo

  • Model & Algorithm for fraudulent  Credit Card transaction

    Hello Everyone,
    I am new to data mining. Currently I am working on an academic project related to fraudulent credit card transactions. My objective is to classify a new transaction as normal or fraudulent. AFAIK the classification/scoring will be different for each card, right? For example, if some customer's past average transaction amount is, let's say, $100 and a new transaction occurs on the same card with an amount of, let's say, $1000, that could be suspected to be a fraudulent transaction. So these conditions will vary from card to card, or customer to customer. Please correct me if what I think is wrong.
    Can anyone tell me which model and algorithm are best for this scenario? Any help would be greatly appreciated.

    Your problem is what is defined as an "anomaly detection" problem. With Oracle Data Mining you will be using the One-Class SVM (Support Vector Machine) algorithm. As a starting point you will feed all attributes of your past data (training data) into the algorithm so it will generate a model for you. Then you will feed new customer data into the trained model (apply process) and it will answer you with either anomaly or not. Both transaction amount and card type are your attributes. One is continuous, the other is categorical (nominal) (and I think you should have many others).
    And your model requires historical characteristics, and most probably will demand a transformation. Assume that you have the following data (I put it in monthly form, but in the credit card case it might need to be daily):
    Case_ID  Month   Transaction  Card_Type
    1        200901     500$      A
    2        200902    1000$      C
    3        200903    2000$      D
    4        200904   10000$      A
    1000     201001               A
    You should transform the data from row format to column format so that ODM can use it:
    Card#  Transaction_200901  Transaction_200902  Transaction_200903  Transaction_200904  Card_Type  Top_Consume_Type_in_Last_6_Months
    1      500                 1000                1000                2000                A          Electronics
    Regards,
    Husnu Sensoy
    http://husnusensoy.wordpress.com
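    As an aside, the one-class approach is quick to prototype outside ODM as well. The sketch below uses scikit-learn's OneClassSVM in Python purely as an illustration; ODM itself is driven through its PL/SQL packages, and the feature layout here is an invented miniature of the transformed table above:
      import numpy as np
      from sklearn.svm import OneClassSVM
      from sklearn.preprocessing import StandardScaler

      # One row per card after the row-to-column transformation:
      # monthly transaction totals for the last 4 months.
      train = np.array([
          [500.0, 1000.0, 1000.0, 2000.0],   # card 1
          [200.0,  250.0,  300.0,  280.0],   # card 2
          [900.0,  950.0, 1100.0, 1000.0],   # card 3
      ])
      scaler = StandardScaler().fit(train)
      model = OneClassSVM(nu=0.05, kernel="rbf").fit(scaler.transform(train))

      # Score a new observation: +1 means "looks normal", -1 means "anomaly".
      new = np.array([[100.0, 120.0, 110.0, 9000.0]])  # sudden large amount
      print(model.predict(scaler.transform(new)))
    In a real deployment you would train on far more history per card, and include the categorical attributes (suitably encoded) rather than just three rows of amounts.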

  • SAP - Minimum Requirements

    Hello,
    I want to know what hardware I need to have.
    1) Imagine I want to use a standalone installation of EHP1 ABAP+JAVA.
    I know that I need a minimum of 6 GB of RAM (I checked in the installation guide).
    But where can I find the processor minimum? Like dual core, 4 core, etc.; I want to know the exact model I should have.
    There isn't much information about it.
    2) I'm thinking to install something like:
    - Database server
    - Application server ABAP
    - Application server JAVA
    That's OK, but which one must have the better quality or minimum requirements?
    Like, should I invest more in the ABAP server, the database server, or Java? I think Java needs more hardware... but what is that hardware (excluding RAM or 64-bit stuff)?
    I checked service.sap.com/sizing, but it's very confusing.
    Can anyone help me?
    Regards

    Hi,
    (I checked in the installation guide)
    But where can I find the processor minimum?
    Like dual core, 4 core, etc.; I want to know the exact model I should have.
    The recommended minimum CPU hardware is at least one dual-core CPU or two single-core CPUs. The selection of a CPU model requires performance analysis and benchmarking against the other well-known server CPUs available in the current market. You can contact different vendors to discuss the H/W requirement further, based on your expected business requirements.
    Please refer to the "Requirements for a Central System" section of the relevant official SAP installation guide. It will give you all the information regarding CPU, HDD space, RAM, swap space, and OS version based on the OS type and selected DB.
    The optimized H/W and S/W requirements are determined based on the following factors:
    1. The business workload and user-request ratio per hour/per day with respect to your existing business processes/activities/requirements.
    2. The permissible concurrent user access ratio/requirement.
    3. The selection of SAP solution (e.g. SAP ECC, SAP BW, etc.) and stack (ABAP, ABAP+JAVA). The H/W requirements for SAP ECC and SAP BW systems are different.
    4. The selection of operating system.
    5. The selection of database, size of database, etc.
    6. Installation of the SAP system in a virtualized/clustered environment will require different H/W based on the number of VMs in use on the single or clustered node.
    Regards,
    Bhavik G. Shroff

  • Process and Requirement Gathering for a SAP solution implementation

    Hi All,
    I don't know if this forum is right for my question, but can someone guide me?
    Assuming there is a greenfield project starting up, and the scope of implementation is SAP HCM: is there an SAP best practice for gathering technical and functional requirements and process definitions for an SAP HCM system (PY, Time, PA, OM, ESS/MSS) for Great Britain? (process models, requirements templates, questionnaires, identifying requirement gaps from the standard SAP process, etc.)
    Is there a recommended methodology for collecting and baselining these requirements? (workshops, schedule, resources, templates, etc.)
    Regards,
    Girish

  • Feedback on vignetting model

    After going through the exercise of creating lens profiles to correct the vignetting inherent in wide-angle pinhole photos, I thought I'd offer some feedback on the data model.  Not a complaint, just an observation.
    The model requires the vignette to be represented as the coefficients of the 2nd-, 4th-, and 6th-degree terms of a polynomial.  This representation offers a very close approximation of the real falloff for moderate ƒ/Dmax ratios, but it cannot represent the *actual* falloff, and the approximation gets increasingly poor as the ratio shrinks.
    Here's a graph of the actual falloff as expressed by the function y = 1/(((d/ƒ)^2)+1), graphed against the representations created by the Lens Profile Creator, which offers a handy tool for fitting the curve (click it to see it bigger):
    The first curve is a hypothetical 6x6cm frame with a "normal" 80mm focal length, as if one converted an old 6x6 film camera to pinhole.  The approximation here is fantastic.
    The second is my 6x12cm pinhole camera with a 40mm focal length.  This is fairly common; both my $480 Zero 612F and the $50 Holga 120WPC use this combination.  The fit here is pretty good, but it's a good thing the frame isn't any wider.
    The last is a relatively common large-format pinhole ratio, with a 25mm focal length on 4x5" film.  The Zero 25B uses this combination.  As you can see, the model is quite poor at representing this curve.
    Since the actual falloff due to distance can be expressed by the simple formula above, it's too bad it can't be represented precisely.  Maybe someday there will be a v2 of this specification...

    That would help, but not nearly as much as a hybrid approach -- allow all photos to be corrected using a mathematically proper model following the 1/(((d/ƒ)^2)+1) formula, *in addition to* a polynomial correction to account for other vignetting factors.
    All photos taken with a flat sensor or film have a falloff curve that matches that formula, though in the "normal" case the effect is quite small.  (Does it get screwed up at large apertures or with funky lens designs like retrofocus lenses?  I'm not enough of an optics guy to figure that out without devoting some serious mental energy, and it's too late at night for that right now.)
    No additional tags would be needed in the profile -- in most cases you already know the focal length and frame size, which is all you need to calculate it.  You could have a boolean tag for whether to perform that correction or not I suppose...
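    The size of the gap is easy to check numerically.  Here is a small numpy sketch that least-squares-fits the 2nd/4th/6th-degree polynomial to the y = 1/(((d/ƒ)^2)+1) falloff for the three cases above; the frame half-diagonals are approximated from the stated formats, and the Lens Profile Creator's actual fitting procedure may well differ:
      import numpy as np

      def worst_fit_error(focal_mm, half_diag_mm):
          d = np.linspace(0.0, half_diag_mm, 500)      # distance from image center (mm)
          y = 1.0 / ((d / focal_mm) ** 2 + 1.0)        # actual falloff
          # Least-squares fit of y ~ 1 + c2*d^2 + c4*d^4 + c6*d^6
          A = np.column_stack([d**2, d**4, d**6])
          coeffs, *_ = np.linalg.lstsq(A, y - 1.0, rcond=None)
          approx = 1.0 + A @ coeffs
          return np.max(np.abs(approx - y))

      # Approximate half-diagonals: 6x6cm ~ 42mm, 6x12cm ~ 67mm, 4x5in ~ 81mm
      for f, half_diag, label in [(80, 42, "6x6 @ 80mm"),
                                  (40, 67, "6x12 @ 40mm"),
                                  (25, 81, "4x5 @ 25mm")]:
          print(f"{label}: max fit error {worst_fit_error(f, half_diag):.4f}")
    The error is negligible for the 80mm case and grows sharply for the 25mm-on-4x5 case, which matches the behavior of the curves in the graph.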
