Clarifications regarding B1iSN 2007 and B1 2007B

Hi All,
  I am new to B1; we have just installed B1iSN 2007 and B1 2007B. I want to know the following:
1. After installation, what testing needs to be done by a developer?
2. How do I connect B1 with XI, R/3, and BW using B1i?
3. Is there a step-by-step guide for configuration and development in B1 and B1i?
I searched the forum but was not able to find a document with a clear explanation.
I hope someone can help me with the above issues.
Regards,
Prakash

Prakash,
Was your B1i platform installed as a built-in component during the B1 installation? If so:
1. You need to uninstall SAP Business One Integration Platform from the Control Panel.
2. Then download the B1i Platform and B1iSN through the following links. After you download, you will find the installation guide in the CD folder, and step-by-step instructions on configuring B1iSN in the folder <<ContentDocumentationEN>>.
1. If you are an SAP Business One partner, go to the Software Distribution Center on the Channel Partner Portal and choose: SAP Business One Products -> Installations -> SAP Business One 2007 -> SAP B1 2007 Integration for NW -> Installation -> Downloads
2. If you are an SAP customer, go to the [Software Distribution Center on the SAP Service Market Place|https://websmp108.sap-ag.de/swdc]
and choose: Download -> Installations and Upgrades -> Entry by Application Group -> SAP Business One -> SAP Business One 2007 -> SAP B1 2007 Integration for NW -> Installation -> Downloads
Kind Regards, Yatsea

Similar Messages

  • Need Clarification on sga_target and sga_max_size

    HI,
    I need some clarification on SGA_TARGET and SGA_MAX_SIZE.
    I have the parameters set as below:
    SGA_MAX_SIZE=10G
    SGA_TARGET=9G
    And I have spread the 9G across all components (DB_CACHE, SHARED_POOL, etc.).
    My doubt: in case the DB needs more than 9 GB of memory, will it automatically take the extra 1G from SGA_MAX_SIZE, or do we have to change SGA_TARGET to 10G?

    Unless and until we set sga_target=10G, the extra 1G (from sga_max_size) is not used. Am I correct?
    No - that's wrong. Any change in the value of SGA_TARGET affects only the sizes of the auto-tuned components. If you increase its value, the additional memory is distributed only among the auto-tuned components; so yes, the 1 GB will be used, because you have sga_max_size=10G. If you decrease the value, the released memory is taken back by the auto-tuning policy from one or more of the auto-tuned components.
    If SGA_MAX_SIZE is greater than SGA_TARGET, you can increase SGA_TARGET without restarting the instance. Otherwise, you'd need to shut down and restart the instance if you wanted to increase SGA_TARGET.
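    For example, since your SGA_MAX_SIZE is already 10G, you could raise SGA_TARGET online. A minimal sketch (values taken from your settings above; run as a DBA in SQL*Plus):
    -- Check the current settings
    SHOW PARAMETER sga_max_size
    SHOW PARAMETER sga_target
    -- Raise SGA_TARGET up to SGA_MAX_SIZE without restarting the instance;
    -- the extra 1G is then distributed among the auto-tuned components
    ALTER SYSTEM SET sga_target = 10G SCOPE = BOTH;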
    Regards
    Girish Sharma

  • Domain Guideance and Clarification using SVN and an Export suggestion

    Hello Oracle SQL Data Modeler Support,
    Apologies if this has been documented somewhere and I have missed reading it, but I have gone through the User Guide and cannot find the clarification I want regarding domains.
    1) WHAT IS BEST PRACTICE TO SAVE WHEN USING SVN
    From the forum I have picked up that the domains file is in the following directory:
    ~\datamodeler\datamodeler\types
    File name is 'defaultdomains.xml'
    When I come to save the file using SVN I get 'Choose versioned folder for storing system types'
    I assume this is where the domains file is stored.
    I require the Domains to be available centrally to all Designs I create; what should I do?
    a) Set the folder to ~\datamodeler\datamodeler\types
    b) Create a design called 'Domains' and store it in this folder
    c) Anything else you may suggest
    2) EXPORT OF DOMAIN FILE SUGGESTION
    This should be a quick win for you: could you please add an Export Domains function? It seems this needs to do no more than make a copy of the defaultdomains.xml file and create it in a specified export directory.
    This would avoid having to go through the forum to find out that the defaultdomains.xml file needs to be copied and transferred over for new SQL Data Modeler installations.

    Hello,
    I require the Domains to be available centrally to all Designs I create, what should I do?
    The default location is fine if SVN is not used and if all designs are used only on that computer.
    If versioning is used, then it's better to have a separate directory for domains, and this directory shouldn't be part of any design's directory - i.e. for designs you can have directories c:\des_1, c:\des_2 ... c:\des_n (one directory per design, containing the design's DMD file and design folder). For domains you can have a directory c:\DM_Sys_types, and you need to set this directory in "Tools > Preferences > Data Modeler > System Types Directory" - logical types, RDBMS sites and scripts will also be stored there.
    Philip

  • Reg: Message and BPM

    Hi
    I have a scenario where I use a JDBC adapter to extract data from a DB. As a result of the query, say 10 rows are returned as a message to the XI server. I have a transformation (BPM) set up, and the receiver (target system) is a file. When I open the file to see its contents, I can see that only the first record has been transferred. The mapping used for the transformation node has an IF condition, but all the fetched records satisfy the conditions in the mapping.
    Please let me know the corrective step.
    Regards, Prabhu

    Hi,
    1) Check the input XML, i.e. that all 10 records are coming into XI. You can check this in SXMB_MONI.
    2) Then test the mapping in the Integration Repository, so you can see if there is any mapping problem. In particular, check whether the occurrence of the target structure is 1..n or 0..n.
    /people/michal.krawczyk2/blog/2005/09/16/xi-how-to-test-your-mapping-in-real-life-scenarios
    3) If the mapping is correct, then check RWB -> Message Monitoring -> Message Display Tool and check the payload.
    4) If this is correct, then check the File Content Conversion of the receiver file adapter.
    Hope this helps,
    Regards,
    Moorthy

  • Clarification about plant and terms of payment In Master Data:

    Hello Gurus,
    I have a doubt as follows:
    1) In the material master (MM01) we maintain Plant in two places:
         a) the plant at organisation level, in the pop-up at the beginning
         b) the delivering plant in the Sales Organisation 1 view
    So, is there any difference between plant and delivering plant, or are they different objects?
    2) Terms of payment in the customer master (XD01):
         a) we maintain terms of payment in the customer master at company code level, under "Payment transactions"
         b) we also maintain terms of payment in the customer master in the sales area data, on the "Billing Documents" tab
    Now why do we need to maintain them at these two levels?
    FYI: I have also tried maintaining two different terms of payment, and the system accepts this without any complaint - why?
    What is the significance of it?
    Please clarify the above.
    Thanks,
    Venky.

    Hello Venky,
    1. Material / Plant
    A material is always stored in a plant, and various parameters must be entered for that particular plant, e.g. storage bin, picking area, negative stocks allowed in plant, GR processing time, etc.
    Now, the same material may or may not be sold from the same plant, and even if it is sold from the same plant, there are different sales parameters for each combination of sales organisation and plant. These sales-organisation-specific parameters are entered in the Sales Organisation/Plant view: tax classification data, cash discount indicator, sales unit, delivering plant, division, minimum order and minimum delivery quantity.
    2. Payment Terms
    The payment terms entered on the Billing tab of the sales area data are copied into the sales order and invoice.
    The payment terms in the company code data are used by the FI department when posting direct payments (without reference to a sales document), e.g. to offer a cash discount for paying in advance.
    Hope this clarifies,
    Thanks,
    Jignesh Mehta

  • Clarification needed - Intune and SCCM side by side

    Hi Forum
    I need some clarification on how the Intune and SCCM clients will react when on the same workstation, non-integrated.
    Will the install be refused? I know it's not ideal; I just need to know.
    Say I manage endpoint protection in Intune and updates in SCCM - is this even possible?
    Thanks in advance
    NN

    It shouldn't be used like that; either use the hybrid configuration, with Intune integrated with ConfigMgr, or use them stand-alone.
    Also, just for testing purposes, I tried to install the Intune agent on a machine with the ConfigMgr client installed, and the installation failed with an error message stating that the ConfigMgr client should be uninstalled first.
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Need Clarification On Unicode and Upgrade-ECC6.0

    Dear All,
    I need some clarification on Unicode and the upgrade. It would be a great help if you could give your time.
    We had 2 code pages - 1100 and 1401 - in the 4.6B system, with the languages FR, EN, ES, PT and PL. The system has now been upgraded to ECC 6.0 non-Unicode.
    Now, in I18N -> System Configuration (RSCPINST), only EN is listed. SPUMG asked for activation of I18N to proceed. When the I18N activation was done, it knocked the code page 1401 out of the TCPDB table.
    Is this normal?
    But code page 1401 is shown as consistent in transaction SCP.
    The system setting has changed to single code page. Will this affect the Unicode migration? How did the additional code page 1401, which was in 4.6B, get knocked out now? How did the languages ES, FR, PT, IT and PL, which were in 4.6B, get knocked out of RSCPINST?
    We are manually filling the vocabulary, since SPUMG is not showing the scanning tabs. The language key in the vocabulary is not completely set, and the reprocess logs are not completely green. Will this allow Unicode migration now? Can we start the Unicode migration even with this status?
    Regards,
    Santosh
    Edited by: santosh suryavanshi on Nov 18, 2010 11:11 PM

    Hi Santosh,
    SAP ECC 6.0 is not supported with MDMP. This is the reason for the behaviour in RSCPINST.
    The standard way for an upgrade based on start release 4.6B with MDMP would be TU&UC (see SAP Note 959698).
    Are you following this procedure?
    Best regards,
    Nils Buerckel
    SAP AG

  • Wanted to get some clarification on JavaCompiler and Reflection

    Hello,
    I am working on building a modular, dynamic framework for web apps. The idea is highly reflection-driven controllers and DAOs (no real news here, everyone does this; I was just giving some background).
    One of the pieces I wanted to build was a dynamic search criteria object to pass to the DAO, used to filter collections coming back from the persistence layer. I wanted these to be runtime-compiled, so they would change with any changes to their corresponding data transfer object (I have written many of these; they are repetitive, and that means they are a good candidate for a program to write for me). I settled on this because I needed to be able to fill the objects with data, and it seemed like the best way to me (I think an enum could also have worked, but I have not used those much, and runtime compilation just sounds so neat).
    So I have the service built to dole out the dynamic search criteria objects, and I have them compiling with all dependencies satisfied, and that all looks good.
    Then I hit the hitch: a runtime class - MyDynamicSearchCriteriaObject - is used in the manner MyDynamicSearchCriteriaObject.class.getMethods(), causing a serious failure (one that is not caught by Exception), and the program just fails.
    So from what I have seen, this is due to the fact that reflection information on the classpath is stored in memory at JVM startup, and any runtime-loaded class cannot be reflected on with the standard API (it looks like Javassist is built just for this, but I have not fiddled with it much yet). I just wanted to get any thoughts anyone has on this: is my analysis way off base, is this how it works, is there some way to trigger a runtime refresh of the reflection structure (I doubt this would ever work, due to the different classloaders)?
    My other question is this: if you cannot reflect on runtime-classloaded classes, then they will only ever really be of value if they implement a compile-time-known interface. This would allow for dynamic implementations of a given interface via runtime compilation, but not use of the runtime-loaded class by its own type definition (via reflection).
    I am a little new to all of this - this is my first stab at building such a framework - so any thoughts, help, or clarification are greatly appreciated!
    Thanks,
    Scott
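
    For reference, the interface-based approach described in the question can be sketched as follows. This is a minimal, hypothetical example (the class and interface names are made up): it compiles a generated class to a temp directory with javax.tools.JavaCompiler, loads it with a URLClassLoader, and uses it through a compile-time-known interface. It assumes you are running on a JDK (on a plain JRE, getSystemJavaCompiler() returns null).

    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;
    import java.io.File;
    import java.io.FileWriter;
    import java.net.URL;
    import java.net.URLClassLoader;

    // Compile-time-known interface: callers work through this type,
    // so they never need to name the runtime-compiled class directly.
    interface SearchCriteria {
        String toWhereClause();
    }

    public class RuntimeCompileDemo {
        public static void main(String[] args) throws Exception {
            File dir = new File(System.getProperty("java.io.tmpdir"), "dyncrit");
            dir.mkdirs();

            // Generated source for a hypothetical criteria class
            String src = "public class MyCriteria implements SearchCriteria {"
                       + "  public String toWhereClause() { return \"WHERE 1=1\"; }"
                       + "}";
            File srcFile = new File(dir, "MyCriteria.java");
            try (FileWriter w = new FileWriter(srcFile)) { w.write(src); }

            // Compile at runtime; pass our classpath so SearchCriteria resolves
            JavaCompiler jc = ToolProvider.getSystemJavaCompiler();
            if (jc == null) throw new IllegalStateException("JDK required");
            int rc = jc.run(null, null, null,
                    "-classpath", System.getProperty("java.class.path"),
                    srcFile.getPath());
            if (rc != 0) throw new IllegalStateException("compilation failed");

            // Load the freshly compiled class and use it via the interface
            try (URLClassLoader cl = new URLClassLoader(new URL[]{dir.toURI().toURL()})) {
                Class<?> c = Class.forName("MyCriteria", true, cl);
                SearchCriteria crit = (SearchCriteria) c.getDeclaredConstructor().newInstance();
                System.out.println(crit.toWhereClause());
                // Standard reflection also works on the runtime-loaded class:
                System.out.println(c.getMethods().length + " public methods");
            }
        }
    }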

    Scott_Taylor wrote:
    So I really have one core question here, do you really want to understand, or do you just want to dazzle me with your credentials (which I am not impressed with at all, I find people who flaunt their credentials generally do so because it is all that they have)?
    You were obviously attempting to 'educate' me as to the benefits of code generation. That has nothing to do with your problem, nor does it have anything to do with me, since obviously I am quite familiar with why one would actually use code generation.
    I will answer though, one last time.
    And I still have no idea what you are talking about.
    I must be crazy or something, since there is probably nothing in the programming world that you could not understand, right? Generally, when you don't understand, you probe with questions so you can understand. Unless you are looking to be dismissive, in which case, why even answer?
    First, there are many things that I don't understand, and which I freely admit - for example GUIs, embedded programming, and business domains such as natural gas exploration.
    However, we are not discussing one of those topics. Rather, we are discussing code generation and, apparently, dynamic SQL/data expression creation. And those are in fact topics that I know a great deal about.
    Any compilation of that is only going to provide a minimal gain when compared to the database hit itself.
    I really don't understand this, and it leads me to believe we are not understanding each other. I am not compiling SQL; consider this:
    Collection<C> list = myNeatDAO.search(Object someDTOSearchCriteria);
    This searchCriteria object is what I am trying to dynamically compile - again, not for performance in any way, but rather to avoid writing code that is 100% based on another class (a class which may change over time). The formation of the query (whether SQL or HQL or whatever) is handled via reflection on the search criteria and a parameterized class in the DAO subclass's definition:
    class MyNeatDAO<SomeDTO> extends AbstractDAOImplementation {}
    So we compare the classes and form some kind of query in the DAO. This is pretty standard stuff. I think where the wires are crossed is that you are 1) not reading attentively (if you had, you would know that I marked this complete many posts ago) and 2) not understanding the goal of the code.
    You got part of it correct, in that someone isn't reading carefully and someone doesn't understand.
    I myself understood your problem several posts ago.
    What you failed to understand is that you are attempting to create a system that I have already built. Several times. It never required runtime compilation.
    Perhaps I wasn't clear before. Hopefully this will make it clearer.
    1. Given a system of DTOs that exist as compiled units (they could even be created at runtime.)
    2. A solution can be built which allows for full expression creation for queries based on those DTOs.
    3. That solution does NOT require runtime compilation.
    4. That solution does NOT require changes for future DTO changes (which would seem obvious, given that it can support DTOs compiled at runtime.)
    Code generation is dependent on code patterns. That is true whether one shot or ongoing. Nothing else. Runtime functionality, on the other hand, is driven by user requirements. Code generation is a tool to achieve that and nothing else.
    I have two thoughts here.
    First, if you have a clear, unwavering requirement (or pattern) based purely on some other single class, how does it make sense not to automate that? Why would I want to write what is basically a near copy of the original DTO each time? Not only does this increase development time (basically paying a developer to do a job a monkey could do), but it also increases modification time later, when the true requirements are either discovered or the originally communicated requirements change. If this single class is runtime-compiled based on the DTO, then when you change the DTO, the change to the corresponding search criteria class is automatic and perfect. There is no debug time; it just works, since you are not writing it and introducing errors. The drawback is that you lose control over that class, which in this case does not create a major problem.
    Again you failed to read "attentively". Why do you think that I myself would have been using code generation since before Java existed?
    Did you not understand that I have in fact been doing it? Or did you not understand that "before Java" means that I have been doing it for a long time?
    I didn't bother mentioning it before, but I also worked on a system which used runtime compilation extensively. As a matter of fact, one of the expression engines that I created was used in that system. However, there was no need for that engine to be compiled at runtime.
    My second thought is that code generation is a tool to meet customer requirements, just like anything else in the software world. There are internal and external customers, and their needs must be balanced. The tool is in the box, and I am going to use it as I see fit. If you think this is wrong or off base, please express it in a useful way. I have laid my cards on the table pretty clearly (I think); if you see something glaring, explain what is wrong, don't just spew dismissive one-liners. That only makes me believe that you can't explain what is wrong (I have seen this a lot with programmers: they don't answer in a clear manner because they can't), and that you just want to be condescending, which I find a little sad. Consequently, this leads me to believe that you probably can't back up anything you are saying, and I may miss a very valid point because you never expressed it in any clear manner.
    You have written customer requirements that state explicitly that the system must have a SQL/data query system in place that is compiled at runtime?
    Unusual requirements. Most requirements would be along the lines of "the call center employee must be able to enter a customer last name, or a phone number, or both".

  • Reg support and maintenance

    Hi friends,
    My query is about support and maintenance.
    Actually, I have worked on implementation till now, so I don't have an idea about support.
    Can anyone explain what exactly we do here, and can you provide any reference for this?
    Thanking you.

    Hi,
    1) Generally in support projects, work comes in the form of tickets; each problem is referred to by a ticket. All the ticket information is maintained in a Remedy database, Lotus Notes, etc.: who raised the ticket, who the resolver is, the date of creation, the ticket's priority (high, medium or low), the specification, and so on. Once the object is completed, the respective ticket is updated with the status 'completed'.
    2) Which tool is used for the ticket flow varies from company to company; some of the tools are Clarify (Lotus Notes) and Dimension (Remedy). Here we get the tickets, choose a ticket, assign it to our name and start the work - mostly it will be some modification to existing code or something like that. You get the requirement from the tool itself, and it displays how many developers have already opened or worked on that ticket, so you get an idea of the current status.
    3) Generally there are SLAs (service level agreements) applicable to support projects, e.g. if a ticket is of high priority then it must be solved in 3 days, if medium then in 5 days, if emergency then in 2 hours, etc. It depends on the agreement between the company and the customer.
    Hope this clarifies it all.
    Amit

  • Clarification about source and destination IPs for internal clients and Edge server

    I just wanted to get some clarification on the correct traffic flow between internal Lync clients and the Edge server.
    From all the diagrams I've looked at, I was under the impression that if internal clients need to hit the Edge server to talk to external clients, they should always do so through the Edge internal interface, which bridges to the Edge external interface and out to the internet - specifically port 3478 from the Edge A/V external interface to the internal clients.
    We aren't seeing that in our environment. When internal clients are talking to external clients, we see the Edge A/V external interface communicating directly with the internal client. In fact, we found this out because after the migration to Lync 2013, external users couldn't create an A/V connection to internal users on either of the Lync servers. We saw traffic on 3478 being dropped between the Edge A/V external interface and the internal client. Once we opened that port, A/V traffic worked.
    We never had this rule in place until we introduced Lync 2013; Lync 2010 didn't seem to require it.
    Is that the correct flow?

    I would also really love to know the outcome of this, but it looks like the thread is marked as "Answered", and it is not so.
    I've been working with a troublesome Lync deployment in which internal users are having issues sharing their desktops with external and federated users. After opening up the whole 50000-59999 range for TCP/UDP on the A/V Edge external interface, things are working much better, but we still see sporadic failures.
    That led us to start digging into the network traffic. We see that UDP traffic on port 3478 is being routed back from the external client to the Edge A/V external interface, inside the DMZ's perimeter, then directly to the internal client on the internal network. It doesn't look like it's making a connection, since the stream is so small, so I wonder if there is a design flaw in my topology.
    There are persistent static routes on the Edge server that use the internal interface to route internally directed traffic over the internal gateway. Tracert confirms the flow, but in Wireshark traces taken during successful connections, UDP port 3478 is still sending packets directly to the internal IP from the Edge's A/V address.
    We also see successfully connected sessions communicating over a different network route that we use to handle internet traffic, rather than our Lync topology's route (the one defined for A/V traffic). The connection opens on ports in the 50000 range, but goes over a router that we have not configured for such traffic. Is that possible?
    Why is UDP traffic on 3478 trying to go directly to internal clients from the external interface?
    It sounds like it's happening elsewhere... Is this a legitimate issue to be diagnosing? Has it been observed and/or resolved by others?

  • Clarification reg Domain Controller IP address change

    Hi,
    We are running ECC Server on a Windows MSCS cluster (primary / secondary).
    The MSCS cluster environment of the ECC server is a member of the Domain Controller's domain.
    We are planning to replace the Domain Controller; nevertheless, all the settings / user names will remain the same.
    The only thing that will change is the IP address.
    For this change, we are planning to update the DNS and WINS IP addresses of the ECC server accordingly.
    Hence we need clarification on the following 2 points:
    1. Is an OS / SAP system restart required?
    For changing the DNS and WINS IP addresses of the ECC server, does the OS / SAP system have to be restarted?
    2. Effect on the SAP system / setting changes required in SAP
    We would like to know whether there will be any issues with regard to SAP system operation.
    In particular, we would like to know whether any setting changes need to be made in the SAP system as well.
    Best Regards
    Raghunahth L

    Hi Anil,
    Thanks for your reply.
    That clears all my doubts.
    Thanks to Eric and to you for spending your valuable time.
    Best Regards
    Raghunahth L

  • Some Quick clarification about 2012 and always on Availability groups

    Hi guys, I just need some clarification about AlwaysOn.
    I've got plenty of experience with normal SQL clusters, but need some clarification around AlwaysOn availability groups.
    I presume that with an AG you set up a listener, and this becomes your point of connection, i.e. this is what you use in the connection string for your applications. So can I use this when I am setting up a new application, and will it automatically make the database that's created by the app highly available, or do you still have to add it to the AG afterwards?
    I have also read that you can still point to the installed SQL instance, and you don't need to use the AG listener, but how does that make your DB HA? How does the failover work?
    I also presume you don't need to use any roles under the MSC anymore.
    Kind regards
    Mark.G

    Hi Mark
    I presume with AG, you setup a listener and this becomes your point of connection
    That's right: the listener is a virtual network name, and you can use it to connect to the primary or secondary replica. Your connections will go against the primary unless you're using read-only routing.
    so can I use this when I am setting up a new application, and will this automatically make the Database that's created by the APP, Highly available
    The first thing you'll have to do is set up the availability group (AG). You can then associate a listener with the AG. I know you're familiar with failover clustering, but the mechanics of this are much closer to database mirroring. Every database that's part of the AG will have at least one secondary replica, and it's possible to fail over to it automatically if you're in synchronous mode.
    I have also read that you can still point to the installed SQL instance, and you don't need to use the AG group listener, but how does this make your DB HA? how does the failover work.
    Yes, you can do that, but from an application perspective you should only do this for databases that are not part of an AG. If you connect to the instance directly and there is a failover, your app will not be able to connect to the database (on the assumption your secondary isn't read-only). App connections should always go via the virtual network name.
    I also presume you don't need to use any roles under the MSC anymore.
    Not sure I understand this - do you mean will roles be available in Cluster Manager? Each AG will have a role, but failover is now controlled through SQL Server rather than the cluster manager.
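
    To make the second point concrete: a database created by an application is not added to an AG automatically; you have to join it yourself. A minimal T-SQL sketch (the AG, database and share names here are hypothetical):
    -- On the primary replica: the database needs FULL recovery and a full backup
    ALTER DATABASE [AppDb] SET RECOVERY FULL;
    BACKUP DATABASE [AppDb] TO DISK = N'\\backupshare\AppDb.bak';
    -- Add it to the existing availability group
    ALTER AVAILABILITY GROUP [MyAG] ADD DATABASE [AppDb];
    -- On each secondary replica: restore the backup WITH NORECOVERY,
    -- then join the database to the AG:
    -- RESTORE DATABASE [AppDb] FROM DISK = N'\\backupshare\AppDb.bak' WITH NORECOVERY;
    -- ALTER DATABASE [AppDb] SET HADR AVAILABILITY GROUP = [MyAG];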

  • Needs Clarification Regarding Segments and Datafiles

    Hi,
    I want clarification regarding segments, datafiles and extents.
    As we know, a segment is made up of one or more extents, and extents are composed of one or more data blocks on disk.
    Since all data is stored in datafiles, which are composed of extents and data blocks, I want to know whether a table (segment) can span multiple datafiles or only a single datafile.
    Regards,
    D.Abbasi

    And an easy way to check it by yourself:
    SQL> create tablespace abbasi_tbs
      2  datafile 'E:\ORADATA\DEMO111P\abbasi_01.dbf' size 1m autoextend off,
      3           'E:\ORADATA\DEMO111P\abbasi_02.dbf' size 1m autoextend off;
    Tablespace created.
    SQL> create table abbasi_tbl (id number)
      2  tablespace abbasi_tbs;
    Table created.
    SQL> insert into abbasi_tbl
      2  select rownum as rn
      3  from   dual
      4  connect by level <=10000;
    10000 rows created.
    SQL> commit;
    Commit complete.
    SQL> select distinct file_id
      2  from   dba_extents
      3  where  segment_name ='ABBASI_TBL';
       FILE_ID
             6
             7
    or...
    SQL> select distinct DBMS_ROWID.ROWID_RELATIVE_FNO(rowid)
      2  from   abbasi_tbl;
    DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
                                       6
                                       7
    SQL> select file_name from dba_data_files where file_id in (6,7);
    FILE_NAME
    E:\ORADATA\DEMO111P\ABBASI_01.DBF
    E:\ORADATA\DEMO111P\ABBASI_02.DBF
    SQL>
    Nicolas.
    added the ROWID function usage
    Edited by: N. Gasparotto on Jun 21, 2009 11:02 AM

  • Reg: Open and Closed Activities in DTR

    Hello Guru's,
    I am confused about the Open and Closed Activities in DTR. What is the significance of these activities?
    And when does an Open Activity change to a Closed Activity?
    And why does an empty Open Activity get created?
    How do Open Activities affect the performance of the system?
    In NWDS, what is the significance of the Public Part and the .dcdef XML files?
    Please explain, and if possible send links to understand these better. I tried searching for information, but with no luck.
    Appreciate the Help.
    Thanks & Regards,
    Pramod

    Dear Pramod,
    An Activity in general can be considered as the wrapper for your changes.
    While working in NWDI, any modification that you perform on your source code must be associated with an Activity.
    Some definitions:
    Client DTR: represents the local track file system present on your local machine, into which you check out the desired code from the server DTR.
    Server DTR: represents the central repository present on the server, where your source code is checked in.
    Open Activities
    These are the activities that have not yet been checked in to the server DTR; they represent the open version.
    Example:
    1) Suppose you have made some modification to "File A", which is associated with Activity-1.
    2) When you open the DTR perspective in your Developer Studio, you will be able to see all the changes that you have associated with Activity-1. This DTR perspective of your NWDS is also termed the client DTR.
    3) But when you open the server DTR in the DTR web interface, you will not see the same file structure that is present in your client DTR; it will not contain the changes that you have incorporated in Activity-1.
    4) The changes present in Activity-1 are not available to any other developers; even if you try to use the same developer ID and load the same track, the activities are still said to be purely local.
    Closed Activities
    These are the activities that have already been checked in to the server DTR; they represent the closed version.
    Example:
    1) In the same example explained above, assume that we now check in Activity-1.
    2) Now, if you open the server DTR in the DTR web interface, you will be able to see the same file structure that is present in your client DTR.
    3) The changes that were present in Activity-1 are now also available to other developers.
    Open Activity and Performance
    1) Open Activities create a temporary entry-point reference on the server DTR (you can search for open activities in the server DTR web interface), and every single activity occupies space on your local machine. So they may affect performance only if the activity count is very high.
    2) In the local file system, the metadata is stored in the .dcdef file in the _comp subfolder.
    3) In addition, every public part has its own metadata file; these files are located in the def subfolder of the component and have the file extension .pp.
    Every component (DC) is defined by a set of files, stored and versioned together with the component in the repository.
    • A file with the reserved name .dcdef stores the basic attributes such as name, description and component type, the parent component (if there is one), a list of child components (if there are any), a list of dependencies, links to the component sources, and the access control list. The .dcdef file must be stored directly in the _comp folder.
    • The folder _comp/def contains one file for each public part of the component, which carries the name of the public part with the extension .pp. This file contains a list of the development objects belonging to the public part and an access control list for this public part.
    Both .dcdef and .pp files are stored in an XML-based format; you can display them in the content display of the Repository Browser or with any text editor. However, this is not necessary: to create and edit these files, use the component tools of the SAP NetWeaver Developer Studio.
    In exceptional cases, you can manually repair the .dcdef or a .pp file of a component. Note that direct manipulations are dangerous and can render a component unusable.
    I hope you will get a clear picture now.
    Regards,
    Shreyas Pandya

  • Reg: Opening and Closing Stock For Plant

    Hi All,
    I want to calculate the opening stock and closing stock plant-wise for a given posting period.
    I have the opening balance and closing balance for the whole period, but I need them month-wise.
    Consider this example: the date range is from 15.01.2010 to 15.03.2010, for one plant.
    I have the opening stock for 15.01.2010 to 15.03.2010 as 10,000 and the closing stock as 15,000.
    But I need it month-wise, like below:
    from 15.01.2010 to 31.01.2010, what is the opening stock and closing stock?
    from 01.02.2010 to 28.02.2010, what is the opening stock and closing stock?
    from 01.03.2010 to 15.03.2010, what is the opening stock and closing stock?
    This is purely plant-wise, not material-wise.
    Could anybody say clearly how I can achieve this scenario?
    I have looked at the MB5B and MC.9 transactions, but I am still confused about how to do this plant-wise for a particular period.
    Thanks & Regards,
    Suresh
    Edited by: suresh suresh on Mar 24, 2010 10:15 AM

    Hi,
    Refer to link below:
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/c1/3766e7449a11d188fe0000e8322f96/frameset.htm
    Regards,
    Venkat.

    Hi, I am working on adobe interactive form, in which I have a normal drop-down. Now the requirement is, depending on the value of another field (country) , I need to display only the states pertaining to the country being displayed. In my drop-down I