Best Practice EJB 3.0 Question

I have a web application consisting of 3 projects:
- Model (EJB 3.0 Session Beans connected to two different databases)
- TagLibrary (custom tag library)
- ViewController (Web App / GUI)
Currently I am connecting to the EJB beans using the code that JDeveloper generates for a test client:
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://localhost:7101");
However, I would like to move these settings to a properties file (I believe jndi.properties) so that they can be modified per application server.
My question is the following:
What is the best practice for session beans in the Model project to access other session beans in the same project? Do I also need to specify a JNDI properties file and settings for that? (This happens when a bean for one database needs to access a bean for the other database.)
Or should I really put these in two separate projects / EJB libraries?
Thanks,
Kris

You have two options. The first is a JNDI lookup (you should be able to use just new InitialContext(), without the environment map, since the code already runs inside the container).
The second one is more elegant and, as far as I'm concerned, is what should be called best practice: dependency injection:
@EJB
YourSessionBeanInterface yourEJB;
If you get stuck, there is plenty of documentation about this on the internet.
Pedja
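
To make the two options concrete, here is a minimal sketch under a few assumptions: the interface and bean names (InventoryService, OrderServiceBean, and so on) are invented for illustration, the JNDI name in the lookup is container-specific, and the jndi.properties keys shown in the comment are simply the standard string forms of Context.INITIAL_CONTEXT_FACTORY and Context.PROVIDER_URL from the question. Each type would live in its own source file.

import javax.ejb.EJB;
import javax.ejb.Local;
import javax.ejb.Stateless;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Hypothetical local interfaces -- one bean per database, both in the Model project.
@Local public interface InventoryService { int countItems(); }
@Local public interface OrderService { int itemsAvailable(); }

// Bean working against database A.
@Stateless
public class InventoryServiceBean implements InventoryService {
    public int countItems() {
        return 0; // query database A here
    }
}

// Bean working against database B. Option 2 (preferred): plain injection --
// no JNDI properties are needed when both beans run in the same container.
@Stateless
public class OrderServiceBean implements OrderService {
    @EJB
    private InventoryService inventory;

    public int itemsAvailable() { return inventory.countItems(); }
}

// Option 1: explicit lookup. Inside the server a no-argument InitialContext
// picks up the container's own settings; the factory and provider URL only
// belong in a jndi.properties file for clients running outside the server:
//   java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
//   java.naming.provider.url=t3://localhost:7101
public class InventoryServiceLocator {
    static InventoryService locate() throws NamingException {
        InitialContext ctx = new InitialContext();
        // The JNDI name depends on how the bean is bound on your server.
        return (InventoryService) ctx.lookup("ejb/InventoryService");
    }
}

With injection, the container wires the reference at deployment time, so nothing server-specific ends up in the Model code at all.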

Similar Messages

  • SAP Best Practices V 4.31 questions - Getting errors while mapping reports.

    Case 1:
    I am trying to map following reports from SAP BP V 4.31 to our data sources:
    a)POSD-Month-Material Group
    b)POSD-Month Suppliers
    c)POSD-Suppliers
    These are the 3 reports out of 9 reports under trade dashboard.
    We have followed the steps outlined in section 9 and used the code provided in appendix IX of the "Manual data source creation document".
    These reports seem to use a field "DMBTR":
    The report expects this field (DMBTR) to exist under the structure ZBPBI131_STRU_TRADE_POSD,
    but the document specifies this (DMBTR) field as part of the structure ZBPBI131_STRU_TRADE_PO.
    The report therefore shows this field as orphaned, and it needs to be mapped while updating the source.
    I am not sure what is wrong (the document and code given there, or the report version I got), how to map that DMBTR field to the existing InfoSet, and what impact it would have if I force it into the InfoSet against the guiding document.
    Case 2:
    While mapping reports under the "Sales Plan vs. Actual Dashboard", I am getting an error saying "Some tables were not found", even after we created the query ZBPBI131_QRYSD on top of its InfoSet in SAP.
    I am not sure what has gone wrong. We have followed the exact steps mentioned in the manual data source creation document.
    Any help from anyone is greatly appreciated.
    Thank you

    Ingo,
    I saw the same reply from you on some other user's post, suggesting to post to the All-in-One forums.
    I was unable to find that exact forum; could you point me to it with a link?
    In fact, if I can get to that forum, maybe my question has already been answered.
    Your help is appreciated.
    Thank you

  • Data Federator XI 3.1 - best practices

    We are planning to roll out a Data Federator setup in our company and I'm looking for some best practices.
    The major question I have is: do we install the Data Federator server components on a dedicated server, or can/should we install the components on one of the machines of our BOE R3 cluster (4 nodes -> 2 management and 2 processing)?
    Is there any document that contains a summary of the best practices for setting up a Data Federator environment?
    Kind regards
    Guy

    Hello,
    the advice is to have a dedicated machine for the DF server.
    DF can become memory and CPU intensive for large queries, so a dedicated machine improves DF performance and avoids negative impacts on other services (e.g. BOE).
    A lot of calculation and temporary storage is done in memory, so the advice is to add as much RAM as needed for large queries. If the RAM is not large enough you will get disk swapping and hence lower performance.
    Hope that helps
    Regards
    PPaolo

  • EDI - 753 Routing Request and 754 Routing Instructions transactions - what SAP ECC6 EDI output type, message type, message code, and basic IDoc type is best practice to generate the IDoc?

    One of our trading partners wishes to implement the 753 Routing Request and 754 Routing Instructions. Can anyone give me a best practice answer to the following questions? We are on SAP ECC6 R3.
    What application? i.e. V2, V7?
    What output condition to use? i.e. LALE, LAT2, SEDI?
    What message type? i.e. SHPADV?
    What basic IDoc type? i.e. SHPMNT05?
    What message code? i.e. SHPM?
    What process code? i.e. SHPM?
    What function module? i.e. IDOC_OUTPUT_SHPMNT?
    Does the SAP Transportation module have to be configured for this to be implemented?
    Your comments are greatly appreciated; we have until Nov 1 to be compliant.

    Good Morning,
    While I did not get any responses to my question, I was able to determine what needed to be configured. I hope the below will help you. This is for SAP ECC6:
    The output application for the 753 Routing Request is "V7"
    The output type is "SEDI"
    Transmission medium is "6"
    The basic IDoc type is "SHPMNT05"
    When setting up your partner profile (WE20), add a message entry with partner role "SP", message type "SHPADV", message code "753" and basic type "SHPMNT05"; under Message Control add application "V7", message type "SEDI" and process code "SHPM".
    Also, http://scn.sap.com/thread/698368 (pages 14 and 15) was helpful.
    I have not configured the inbound 754. For now the inbound 754 Routing Instructions will be emailed to our traffic department.
    Thank you,
    Have a great day,
    jane

  • Best Practice question - null or empty object?

    Given a collection of objects where each object in the collection is an aggregation, is it better to leave references in the object as null or to instantiate an empty object? Now I'll clarify this a bit more.....
    I have an object, MyCollection, that extends Collection and implements Serializable (work requirement). MyCollection is sent as a return from an EJB search method. The search method looks up data in a database and creates MyItem objects for each row in the database. If there are 10 rows, MyCollection would contain 10 MyItem objects (references, of course).
    MyItem has three attributes:
    public class MyItem implements Serializable {
        String name;
        String description;
        MyItemDetail detail;
    }
    When creating MyItem, let's say that this item didn't have any details, so there is no reason to create MyItemDetail. Is it better to leave detail as a null reference, or should a MyItemDetail object be created? I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case. There are reasons for both approaches. Obviously, a bunch of empty objects going over RMI is a strain on resources, whereas a bunch of null references is not. But on the receiving end you have to account for the MyItemDetail reference being null or not - is this a hassle or not?
    I looked for this at Java Practices (http://www.javapractices.com) but found nothing.
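
    For illustration only, here is a minimal sketch of the two alternatives being weighed. MyItem and MyItemDetail follow the post; the EMPTY constant, the empty flag and readResolve are hypothetical additions showing one common Null Object variant, not something from the original question.

    // Option A: leave detail as null -- cheap over RMI, but every consumer
    // must remember the guard:
    //   String text = (item.detail != null) ? item.detail.toString() : "";

    // Option B: Null Object -- consumers never branch on null, at the cost
    // of shipping one small shared instance over RMI.
    public class MyItemDetail implements java.io.Serializable {
        public static final MyItemDetail EMPTY = new MyItemDetail(true);

        private final boolean empty;

        public MyItemDetail() { this(false); }
        private MyItemDetail(boolean empty) { this.empty = empty; }

        public boolean isEmpty() { return empty; }

        // Collapse deserialized copies of EMPTY back to the shared instance
        // so identity checks still work after the object has crossed RMI.
        private Object readResolve() { return empty ? EMPTY : this; }
    }

    // The search method would then set item.detail = MyItemDetail.EMPTY instead
    // of null, and the receiving end tests item.detail.isEmpty().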

    > I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case.
    It depends, but in general I use null.
    > Stupid.
    Thanks for that insightful comment.
    > I do a lot of database work though. And for that null means something specific.
    Sure, return null if you have a context where null means something, like for example that you got no result at all. But as I said before, it's best to keep the nulls at the perimeter of your design. Don't let nulls slip through.
    As I said, I do a lot of database work. And it does mean something specific. Thus (in conclusion) that means that, in "general", I use null most of the time.
    Exactly what part of that didn't you follow?
    And exactly what sort of value do you use for a Date when it is undefined? What non-null value do you use such that your users do not have to write exactly the same code that they would to check for null anyway?

  • Best Practice Question

    I have 3 areas for my DWH:
    The first area is staging, then validation and core.
    Staging is just to load data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in the ETL is to load the data from core to validation; let's say my GEO_DIM dimension goes to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load data from the source systems to staging, but only those records whose CRC checksum differs from the previously downloaded one, so only changed or new data goes to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies. It's just validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is: it depends... Kidding aside, are you planning to use a flat star table for this dimension? If that is the case, you would join the sources together and load the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best practice deployment in a productive
    environment (currently we are not using a WLS cluster).
    We are using Ant for building, packaging and (dynamic) deployment (via weblogic.Deployer)
    on the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only
    for development, but also for the productive system.
    But I found some hints in some books, and these guys prefer static deployment
    for the production system.
    My questions now:
    Could anybody provide me with links to whitepapers regarding best practice
    for a deployment into a production system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of
    this kind of deployment)?
    Thanks in advance
    -Martin

    Hi Siva,
    What best practice are you looking for? If you can be specific in your question we could provide an appropriate response.
    From my Basis experience, some of the best practices:
    1) The productive landscape should have high availability for the business. For this you may set up DR or HA or both.
    2) It should have backups configured for which a restore has already been tested.
    3) It should have all the monitoring set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorizations based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be highly controlled. Any transport to production should be moved only with appropriate change board approvals.
    7) Relevant database and OS security parameters should be tested and enabled before go-live.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing its base of centers. This growth has turned a workload that used to be manageable with but two people into a never-ending sprint with five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce the overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking as such:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable because if I wanted to create a list of stores for a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention that much of the important data is held in attributes rather than in text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be generated automatically with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • Best Practice for EJB calls from servlet?

    Hi folks
    I could not find general rules for making calls to a stateful EJB from the web container (e.g. from a backing bean). Some books say it is bad programming style to call them directly from a common servlet; they say to first create an HttpSession object and call the stateful EJB from there.
    I'm a bit confused because I'm missing a best practice guide on where to initiate such calls from.
    Can somebody please point me in the right direction?
    Kind Regards
    Bruno
    Edited by: zajoho on Oct 30, 2008 11:14 PM

    Hi Bruno,
    The main issue with the combination of stateful session beans and servlets is the servlet threading model.
    It is dangerous to store a stateful session bean reference in servlet instance state, since the servlet instance
    can be accessed concurrently, yet a stateful session bean reference is intended to be used by only one
    client.
    As you point out, one alternative is to store the reference in the HttpSession. That associates the reference
    with a particular client, which matches the stateful session bean programming model.
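
    A rough sketch of that pattern, with hypothetical names (CartLocal, the "cart" attribute key and the JNDI name are assumptions, not from this thread): the stateful reference is created once per HTTP session and kept there, never in a servlet field.

    import java.io.IOException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // CartLocal.java -- hypothetical local view of a @Stateful shopping-cart bean.
    @javax.ejb.Local
    public interface CartLocal {
        void addItem(String item);
    }

    // CartServlet.java
    public class CartServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession();

            // One stateful bean per HTTP session, never per servlet instance.
            CartLocal cart = (CartLocal) session.getAttribute("cart");
            if (cart == null) {
                try {
                    // The JNDI name is container-specific; adjust as needed.
                    cart = (CartLocal) new InitialContext().lookup("ejb/Cart");
                } catch (NamingException e) {
                    throw new ServletException(e);
                }
                session.setAttribute("cart", cart);
            }
            cart.addItem(req.getParameter("item"));
        }
    }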

  • Best practice for customizing EJB property after deployment

    Hi Gurus,
    What is the best practice for customizing an EJB property after deployment in NW 7.1? I have a stateless session bean and it needs to get some environment information before acting, but that information can only be known at runtime. What should I do to achieve this? I thought I could bind the property to a JNDI context, but I did not find out where to declare and change the context value. Please advise. Thanks.
    B.R.

    Hi.
    I have a similar problem, but I still cannot edit the properties in the ejb-jar.xml.
    I tried to stop the web service, but the properties still remain unmodifiable.
    Could you advise me how to change them?
    We have installed SAP Server 7.0.2
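
    One standard EJB 3 way to externalize such a value is an env-entry: declare it in ejb-jar.xml (or the vendor descriptor), let the container bind it under java:comp/env, and inject it. A minimal sketch with hypothetical names (configPath, ReportLocal, ReportBean); whether your NetWeaver version lets you change the value after deployment without redeploying is a separate, server-specific question.

    import javax.annotation.Resource;
    import javax.ejb.Stateless;

    // ReportLocal.java -- hypothetical local interface.
    @javax.ejb.Local
    public interface ReportLocal {
        String currentConfigPath();
    }

    // ReportBean.java -- reads a deployment-time setting instead of hard-coding it.
    @Stateless
    public class ReportBean implements ReportLocal {

        // Injected from an <env-entry> named "configPath" of type java.lang.String
        // declared in ejb-jar.xml; also reachable via a lookup of
        // "java:comp/env/configPath" from SessionContext or InitialContext.
        @Resource(name = "configPath")
        private String configPath;

        public String currentConfigPath() {
            return configPath;
        }
    }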

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found was to add the same session twice and for the first instance select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the source tables. This will impact run times and double the querying against the source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MaxL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MaxL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance you can offer with the two products working together would be GREATLY appreciated!
    Robert

    As far as I know, addUser(User user) { ... } is much more useful for several reasons:
    1. It's object oriented.
    2. It's easier to write, because if the object has many parameters it is very painful to write a method with comma-separated parameters.
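
    For what it's worth, a tiny sketch of the comparison that reply is drawing (the User class, its fields and UserService are hypothetical):

    // User.java -- parameter object; new fields can be added later
    // without breaking existing callers.
    public class User {
        final String name;
        final String email;
        final String phone;

        public User(String name, String email, String phone) {
            this.name = name;
            this.email = email;
            this.phone = phone;
        }
    }

    // UserService.java
    public class UserService {
        // One argument, easy to evolve.
        public void addUser(User user) { /* persist user */ }

        // Versus an ever-growing comma-separated parameter list.
        public void addUser(String name, String email, String phone) {
            addUser(new User(name, email, phone));
        }
    }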

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but I think that perhaps I need general community advice on best practice with data sources.
    In Crystal Reports I can choose the data source location from any number of connection types, e.g. ado.net(xml), com, oledb, odbc.
    Now, in regard to that post, the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3 / XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips is appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send an error message to the SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we've copied various workflows which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details on the workflow for use in custom notifications sent by the workflow.
    What I'm looking for is: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the self-service page?
    Writing information into a log or table at the database level works for trouble-shooting, but we’re looking for something that will provide the end-user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We have implemented the same kind of requirement long back.
    We have defined our PL/SQL procedures with two OUT parameters
    1) Result Type (S:Success, E:Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise any message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S'):
    EXCEPTION
        WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
            p_result_flag := 'E';
            p_result_message := hr_utility.get_message;
        WHEN OTHERS THEN
            p_result_flag := 'E';
            p_result_message := hr_utility.get_message;
            fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
            fnd_message.set_token('2', substr(sqlerrm, 1, 200));
            fnd_msg_pub.add;
            p_result_message := fnd_msg_pub.get_detail;
    After executing the PL/SQL from Java, we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    String resultFlag = orclStmt.getString(/* result flag bind position */);
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(/* result message bind position */);
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data in the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method of achieving this. What I mean is ...
    - Is it OK to have everything on one switch but set each respective port group to have a primary and a failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    - Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on however many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by the docs.
    I then set the source system parameters.
    I then generate the needed parameters / assemble the copied container for ALL subject areas present in the container.
    I then build an execution plan JUST FOR THE 4 SUBJECT AREAS I need and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all the subject areas I don't need out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake and have the container contain just a few subject areas, rather than wait till I build the execution plan and only then focus on a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt
