Loading an object - best practice

I'm currently loading a collection of objects and trying to
determine the best way to "load" each object. Should I be
calling set functions on the object and specifying the values, or
should I simply call a load function on the object and pass it the
data through parameters? The data I would be sending via
params would be an XML object. My reasoning for the second method
of loading is to black-box the object: basically, throw a data
object into a load function, and the object itself would know how to
parse the data and set its own properties.
thoughts?
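A minimal sketch of the second approach, with the class, element names, and fields all invented for illustration: the object is handed a parsed XML element and knows how to set its own properties from it.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Hypothetical domain object that "black-boxes" its own loading:
// callers pass in the data, and the object parses out its properties.
public class Customer {
    private String name;
    private int age;

    // The load method: hand the object its XML and let it do the rest.
    public void load(Element data) {
        this.name = data.getElementsByTagName("name").item(0).getTextContent();
        this.age = Integer.parseInt(
            data.getElementsByTagName("age").item(0).getTextContent());
    }

    public String getName() { return name; }
    public int getAge() { return age; }

    public static void main(String[] args) throws Exception {
        String xml = "<customer><name>Ada</name><age>36</age></customer>";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        Customer c = new Customer();
        c.load(doc.getDocumentElement());
        System.out.println(c.getName() + " " + c.getAge()); // prints "Ada 36"
    }
}
```

The caller never touches individual setters, so the XML layout is a private concern of the class.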

anyone?

Similar Messages

  • Request/session objects - Best Practice

    I have a simple scenario which I think other people will recognise, and am wondering what the best pattern to use in JSF is:
    I have a summary page which displays orders for the current user in a dataTable. This has a backing bean called orders.
    When the user clicks on an order, it calls an action on the orders object which fetches the specific order from the database and navigates to a second page to display details about the order.
    I don't mind the orders object being a session bean, but I don't really want the order bean to be session, it needs to be a request bean.
    How do I place the order bean somewhere so that it is a request bean on the second page?
    In ASP.NET I could place it in the ViewState before transferring control to the second page, or temporarily put it on the session, then pull it off the session when the second page loads the first time.
    The problem with putting the order object in the session is that it never goes away, and might be confused if the user has multiple browser windows open trying to look at 2 orders at the same time.

    Here's the way I do this kind of thing.
    In this case, I'd have a session bean called orders. It's got an orders property that will return all the orders for display in a dataTable. It's got a reference to a session-scoped bean that contains the id of the currently selected order. When the user selects an order (typically by clicking a commandLink) the id of the selected order is set in a session scoped bean called orderDetailsOptions and the user is navigated to the order details page. I'd have a Refresh button on the page that causes the orders to be reloaded.
    public class OrdersBean {
      private OrderDetailsOptionsBean orderDetailsOptions;
      private DataModel orders;

      public OrdersBean() {
        reset();
      }

      private void reset() {
        orders = null;
      }

      public void setOrderDetailsOptions( OrderDetailsOptionsBean orderDetailsOptions ) {
        this.orderDetailsOptions = orderDetailsOptions;
      }

      public DataModel getOrders() {
        if ( orders == null ) {
          ResultSet rs = doQuery();
          orders = new ResultDataModel( ResultSupport.toResult( rs ) );
        }
        return orders;
      }

      /* Actions */
      public String orderSelected() {
        Map row_data = (Map) orders.getRowData();
        String order_id = (String) row_data.get( "orderId" );
        orderDetailsOptions.setOrderId( order_id );
        reset();
        return "toOrderDetails";
      }

      public String refresh() {
        reset();
        return "success";
      }
    }
    The OrderDetailsOptionsBean for the session holds the id of the currently selected order.
    public class OrderDetailsOptionsBean {
      private String order_id;

      public void setOrderId( String order_id ) {
        this.order_id = order_id;
      }

      public String getOrderId() {
        return order_id;
      }
    }
    The OrderDetailsBean is a request bean to get the details for the selected order.
    public class OrderDetailsBean {
      private OrderDetailsOptionsBean options;
      private String order_id = null;
      private Map fields = null;

      public void setOptions( OrderDetailsOptionsBean options ) {
        this.options = options;
      }

      public String getOrderId() {
        if ( order_id == null ) {
          order_id = options.getOrderId();
        }
        return order_id;
      }

      public Map getFields() {
        if ( fields == null ) {
          getOrderId();
          // Do the query for this order_id and populate fields.
        }
        return fields;
      }
    }
    Then here's what's in faces-config for this:
    <managed-bean>
      <managed-bean-name>orders</managed-bean-name>
      <managed-bean-class>OrdersBean</managed-bean-class>
      <managed-bean-scope>session</managed-bean-scope>
      <managed-property>
        <property-name>orderDetailsOptions</property-name>
        <property-class>OrderDetailsOptionsBean</property-class>
        <value>#{orderDetailsOptions}</value>
      </managed-property>
    </managed-bean>
    <managed-bean>
      <managed-bean-name>orderDetailsOptions</managed-bean-name>
      <managed-bean-class>OrderDetailsOptionsBean</managed-bean-class>
      <managed-bean-scope>session</managed-bean-scope>
    </managed-bean>
    <managed-bean>
      <managed-bean-name>orderDetails</managed-bean-name>
      <managed-bean-class>OrderDetailsBean</managed-bean-class>
      <managed-bean-scope>request</managed-bean-scope>
      <managed-property>
        <property-name>options</property-name>
        <property-class>OrderDetailsOptionsBean</property-class>
        <value>#{orderDetailsOptions}</value>
      </managed-property>
    </managed-bean>
    <navigation-rule>
      <from-view-id>orders.jsp</from-view-id>
      <navigation-case>
        <from-outcome>toOrderDetails</from-outcome>
        <to-view-id>orderDetails.jsp</to-view-id>
        <redirect />
      </navigation-case>
    </navigation-rule>

  • Loading Snow Leopard, Best Practices Advice

    Currently w/Leopard on all machines, with Adobe CS5 and (cocoa) 64 bit, I'll soon need to move to SL -- It does make me nervous though as things are going relatively well with the current OS and one of my machines is in a production environment. I do have multiple volumes on each MP and will start with the "less critical" ones of course.
    Is erase and install the best method or can I install over Leopard and still have the similar results as if I had loaded it on a cleanly formatted volume? Other caveats?
    TIA,
    Geoff

    If you have various critical applications that are currently working, another option would be to create another partition and install a new copy of Snow Leopard there. Then you can move things over gradually, and if something doesn't work, you can just boot into the Leopard partition - the best of both worlds (I am currently running my MBP this way, although I don't run that many third party apps).

  • HCM Best Practice LOAD - Error in Copy Report variants Step

    Dear Friends,
    We are trying to use/load the HCM Best Practice data for testing purposes. We have applied the HCM Best Practice Add-On, and it went successfully. When we try to execute the Preparation step (in the Copy Variant phase) using Tcode /HRBPUS1/C004_K01_01, we get the following error.
    <b>E00555 Make an entry in all required fields</b>
    Request you to provide some solution for the same.
    Thanks and regards,
    Abhilasha

    Hi Sunny and others,
    The main error here was that sapinst couldn't find and read cntrlW01.dbf, because this file was not in that location (/oracle/W01/112_64/dbs).
    I already solved this issue. What I did was:
    1) As the ora user, I went into sqlplus as sysdba and ran the following script (the control.sql script that was generated at the beginning of the system copy process):
    SQL> @/oracle/CONTROL.SQL
    Connected.
    ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
    ORA-01081: cannot start already-running ORACLE - shut it down first
    Control file created.
    Database altered.
    Tablespace altered.
    2) This is very important: you need to know where the .CTL file created by the script is, so I checked the value of the control_files parameter at that moment:
    SQL> show parameter control_files;
    /oracle/W01/oraflash/W01/controlfile/o1_mf_6pqvl4jt_.ctl
    3) Next, logged in as ora, so that sapinst could read the value it needed, I copied this file to the location/path where sapinst expects it:
    cp /oracle/W01/oraflash/W01/controlfile/o1_mf_6pqvl4jt_.ctl /oracle/W01/112_64/dbs/cntrlW01.dbf
    4) I ran sapinst again from the point where it had stopped.
    Thank you for your help.
    Kind regards,
    João Dimas - Portugal

  • Is construction of webi directly in production a best practice?

    With BEx queries and universes well consolidated and tested by an IT group,
    can building Webi reports directly in production, without going through test and quality systems, be considered a Business Objects best practice?
    Is it possible to allow end users (non-IT personnel) to build these Webi reports?
    Is there a best-practices document where SAP makes this recommendation?
    Thanks in advance for your answers.
    Ramón Mediero

    If the universe and everything else have been tested and signed off, and the end users are familiar with Webi report development and want their own ad-hoc reports instead of a pre-developed report set, there is no issue with allowing end users to develop Webi reports in production. However, there are a few points to take care of:
    > Check whether report creation in production's public folder is feasible. If yes, how? Do we need to create separate folders for individual users, or something else? If not, what is the alternative - for example, can they create reports in their favorites folder?
    > We also need some control over the number of reports users will create. Users may create many reports with huge data refreshes, and PROD could face performance issues, etc.
    There can be many such considerations to take into account.
    Hope this gives you some idea...
    Vills

  • Best practice - caching objects

    What is the best practice when many transactions requires a persistent
    object that does not change?
    For example, in a ASP model supporting many organizations, organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary

    The problem with using object id fields instead of PC object references in your
    object model is that it makes your object model less useful and intuitive.
    Taken to the extreme (replacing all object references with their IDs), you
    will end up with objects like rows in a JDBC dataset. Plus, if you use a PM per
    HTTP request, it will not do you any good, since the organization data won't be in
    the PM anyway, so it might even be slower (no optimizations such as Kodo batch
    loads).
    So we do not do it.
    What you can do:
    1. Do nothing special - just use the JVM-level or distributed cache provided by
    Kodo. You will not need to access the database to get your organization data, but
    the object creation cost in each PM is still there (do not forget that the cache we
    are talking about is a state cache, not a PC object cache). Good because it's
    transparent.
    2. Designate a single application-wide PM for all your read-only big
    things - lookup screens etc. Use a PM per request for the rest. Not
    transparent - affects your application design.
    3. If a large portion of your system is read-only, use PM pooling. We did it
    pretty successfully. The requirement is to be able to recognize all PCs
    which are updateable and evict/makeTransient those when the PM is returned to
    the pool (Kodo has a nice extension in PersistenceManagerImpl for removing
    all managed objects of a certain class), so you do not have stale data in your
    PM. You can use Apache Commons Pool to do the pooling, and make sure your pool
    is able to shrink. It is transparent and increases performance considerably.
    One approach we use
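The borrow/return idea behind option 3 can be sketched in plain Java. This is a simplified illustration only: the Kodo-specific eviction and the Apache Commons Pool API are omitted, and the generic type stands in for a PersistenceManager.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a resource pool: callers borrow an idle instance
// if one exists, and return it when done. Not thread-pool-grade code.
public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<T>();

    // Returns a pooled instance, or null so the caller creates a new one.
    public synchronized T borrow() {
        return idle.isEmpty() ? null : idle.pop();
    }

    public synchronized void giveBack(T resource) {
        // A real PM pool would evict updateable objects here first,
        // so no stale data survives in the returned manager.
        idle.push(resource);
    }
}
```

In the real setup described above, Commons Pool would also handle sizing and shrinking, which this sketch leaves out.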

  • Best practice to Consolidate forecast before loading into ASCP

    Hi,
    Can anyone suggest best practice to consolidate forecast that is in spreadsheets. Forecast comes from Sales, Finance and Operations. Then consolidated forecast should be loaded into ASCP.
    Is there any way to automate the load?
    Is Oracle S&OP best product out there?
    Do we need Oracle Demand management also?
    Regards

    Forecast comes from Sales, Finance and Operations (spreadsheets)
    -> Use integration interfaces to load the data into three different series: sales fcst, finance fcst and ops fcst.
    Then the consolidated forecast should be loaded into ASCP.
    -> Create a workflow/integration interface to load the consolidation of the 3 series.
    So this can be done in DM.
    Also, a standard workflow exists in S&OP to publish the consensus forecast to ASCP, which accomplishes your objective.

  • Best practice for lazy-loading collection once but making sure it's there?

    I'm confused about the best practice for handling the 'setup' of a form, where I need a remote call to take place just once for the form, but I also need to make use of this collection for a combobox that will change when different rows in the datagrid are clicked. Easier if I just explain...
    You click on a row in a datagrid to edit an object (for this example let's say it's an "Employee")
    The form you go to needs to have a collection of "Department" objects loaded by a remote call. This collection of departments only should happen once, since it's not common for them to change. The collection of departments is used to populate a form combobox.
    You need to figure out which department of the comboBox is the selectedIndex by iterating over the departments and finding the one that matches the employee.department.id
    Individually, I know how I can do each of the above, but due to the asynch nature of Flex, I'm having trouble setting up things. Here are some issues...
    My initial thought was to put the loading of the departments in an init() method on the employeeForm, which would run on the form's creationComplete() event. Then, in the grid component, when the event handler for clicking on a row fires, I call a setup() method on my employeeForm which figures out which selectedIndex to set on the combobox by looking at the departments.
    The problem is the resultHandler for the departments load might not have returned (so the departments might not be there when 'setUp' is called), yet I can't put my business logic to determine the correct combobox in the departmentResultHandler since that would mean I'd always have to fire the call to the remote server object every time which I don't want.
    I have to be missing a simple best practice? Suggestions welcome.

    Hi there rickcr
    This is pretty rough and you'll need to do some tidying up but have a look below.
    <?xml version="1.0"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;
                import mx.collections.ArrayCollection;

                private var comboData:ArrayCollection;

                private function setUp():void {
                    if (comboData) {
                        Alert.show('Data Is Present');
                        populateForm();
                    } else {
                        Alert.show('Data Not');
                        getData();
                    }
                }

                private function getData():void {
                    comboData = new ArrayCollection();
                    // On the result of this remote call, call setUp() again.
                }

                private function populateForm():void {
                    // Populate your form.
                }
            ]]>
        </mx:Script>
        <mx:TabNavigator left="50" right="638" top="50" bottom="413" minWidth="500" minHeight="500">
            <mx:Canvas label="Tab 1" width="100%" height="100%">
            </mx:Canvas>
            <mx:Canvas label="Tab 2" width="100%" height="100%" show="setUp()">
            </mx:Canvas>
        </mx:TabNavigator>
    </mx:Application>
    I think this example kind of shows what you want. When you first click tab 2 there is no data. When you click tab 2 again, there is. The data for your combo is going to be stored in comboData. When the component first gets created, comboData is not instantiated, just declared. This allows you to say
    if (comboData)
    This means that if the variable has your data in it, you can populate the form. At first it doesn't, so on the else branch you can fetch your data, and then when the result comes back you can say comboData = new ArrayCollection(), put the data in it, and call the setUp procedure again. This time comboData is populated and exists, so it will run the populateForm method and you can decide which selected item to set.
    If this is on a bigger scale you'll want to look into creating a proper manager class to handle this, but this demo simply shows that you can test whether the data is there.
    Hope it helps and gives you some ideas.
    Andrew

  • Best Practices - Update Explorer Properties of BW Objects

    What are some best practices of using the process chain type "Update Explorer Properties of BW Objects"?
    We have the option of updating Conversion Indexes, Hierarchy Indexes, Authorization Indexes, and RKF/CKF Indexes.
    When should we run each update process?
    Here are some options we're considering:
    Conversion Indexes - Run this within InfoCube load process chains that contain currency conversions within explorer objects.
    Hierarchy Indexes - When would this need to be run? Does this need to be run for PartProviders and/or Snapshots? Do ACRs handle this update? Should this be run within InfoCube load chains, or after ACRs?
    Authorization - We plan to run this a couple times a day for all explorer objects.
    RKF/CKF - Does this need to run after InfoCube loads? With PartProvider and/or Snapshot indexes? After transports have completed?
    Thanks,
    Cote

    Does anyone productively use explorer and this process type for process chains?

  • Bean load best practice

    I am not new to Java, but up until now have been a programmer. I am now getting more into design and architecture and have a question about best practice. This question arises from a UML class I was taking. But in the class we stayed within the UML and did not get into implementation.
    My Question
    When creating classes and designing how they interact, what is the best practice for implementing associative relationships. For example, if I were modeling a Barn that contained Animals, I would create a Barn bean and an Animal bean. Since the Barn contained Animals I could create the code like this:
    public class Barn {
      private String color;
      private Collection animals;

      public void setColor(String newColor) { color = newColor; }
      public String getColor() { return color; }
      public void setAnimals(Collection newAnimals) { animals = newAnimals; }
      public Collection getAnimals() { return animals; }
    }

    public class Animal {
      private String name;

      public void setName(String newName) { name = newName; }
      public String getName() { return name; }
    }
    The Collection within the Barn bean would be made up of Animal beans.
    This seems fairly straight forward. However, what if I loaded the bean from a database? When building the bean, do I also find all animals and build the Animal beans and create the Collection to store within the Barn object?
    Or
    Do I omit the animal Collection from my Barn bean and only populate the Collection at runtime, when someone calls the getAnimals method?
    I am confident that the latter is the better design for performance and synchonization issues. But I wanted to get other opinions.
    Do I need to read up more on object design?
    Thanks,
    Lonnie

    Use lazy initialization. Basically, unless the data is needed, don't load it.
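That advice, applied to the Barn/Animal example, looks roughly like this sketch. The findAnimalsForBarn method is a made-up stand-in for the real database lookup, and the animals are just strings here to keep it short.

```java
import java.util.ArrayList;
import java.util.Collection;

// Lazy initialization: the animals collection is only built the first
// time someone asks for it, not when the Barn bean itself is loaded.
public class Barn {
    private Collection<String> animals; // null until first requested

    public Collection<String> getAnimals() {
        if (animals == null) {              // not loaded yet
            animals = findAnimalsForBarn(); // load on first access only
        }
        return animals;
    }

    // Hypothetical DAO call; a real version would query the database.
    private Collection<String> findAnimalsForBarn() {
        Collection<String> result = new ArrayList<String>();
        result.add("cow");
        result.add("horse");
        return result;
    }
}
```

Repeated calls return the same cached collection, so the database is hit at most once per bean.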

  • Flat File load best practice

    Hi,
    I'm looking for a Flat File best practice for data loading.
    The need is to load flat file data into BI 7. The flat file structure has been standardized, but it contains 4 slightly different flavors of data. Thus, some fields may be empty while others are mandatory. The idea is to have separate cubes at the end of the data flow.
    Onto the loading of said file:
    Is it best to load all data flavors into 1 PSA and then separate into 4 specific DSOs based on data type?
    Or should data be separated into separate file loads as early as PSA? So, have 4 DSources/PSAs and have separate flows from there-on up to cube?
    I guess pros/cons may come down to where the maintenance falls: separate files vs separate PSA/DSOs...??
    Appreciate any suggestions/advice.
    Thanks,
    Gregg

    I'm not sure if there is any best practice for this scenario (or maybe there is one), as this is data related to a specific customer's needs. But if I were you, I would handle one file into the PSA and source the data from there into its respective ODS. That would give me more flexibility within BI to manipulate the data as needed, without having to involve the business for 4 different files (chances are they will get them wrong - splitting the files). So in case of any issue, your troubleshooting would start from the PSA rather than going through the file (very painful and frustrating) to see which records in the file screwed up the report. I'm more comfortable handling BI objects rather than data files - because you know exactly where to look.
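The "one inbound staging area, route each record to its target by data type" idea in that answer can be sketched generically. This is not BI/PSA code; the flavor field position and the comma-separated record format are invented purely to illustrate the routing step.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Generic sketch: all records land in one place, then each is routed
// to a per-flavor bucket (standing in for the 4 DSOs).
public class FlavorRouter {
    private final Map<String, List<String>> targets =
        new HashMap<String, List<String>>();

    public void route(String record) {
        String flavor = record.split(",")[0];   // assume first field = flavor
        List<String> bucket = targets.get(flavor);
        if (bucket == null) {
            bucket = new ArrayList<String>();
            targets.put(flavor, bucket);
        }
        bucket.add(record);
    }

    public List<String> target(String flavor) {
        List<String> bucket = targets.get(flavor);
        return bucket == null ? new ArrayList<String>() : bucket;
    }
}
```

The point of the single entry point is the one the answer makes: all troubleshooting starts in one place, before the split.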

  • Pool : best practice ODI : PLSQL or Interface object ?

    Hello,
    My ODI consultant has developed an interface to load a flat file into Hyperion Planning:
    * First step: load the flat file into staging - done with an "Interface" object.
    * Second step: transform the staging table (1, 2, 3 ==> JAN, FEB, MAR // transform "-" into ND_Customer ... very easy transformations!) - done through a PL/SQL procedure. The result is loaded into FACT_TABLE.
    * Third step: load FACT_TABLE into Essbase - done with an "Interface" object.
    During design we didn't discuss the technology, but after the build I'm very surprised by the second step. There is no justification for doing it in PL/SQL. My consultant tells me: "I'd rather use PL/SQL." But from my point of view, ODI best practice is to use an "Interface" (more flexible - you can change the topology without impact on the interface, etc.).
    What is your point of view? Should I raise an issue and expect my consultant to rewrite it with an "Interface" object?
    Rgds

    Thanks SH. The complexity (use of two intermediate tables, STAGING and FACT) is due to our requirement to archive the original data for one year (in STAGING) and to provide an audit trail from Essbase back to the original data (before transformation). From Essbase we can go back to the FACT table (same member names), then back to STAGING by using a unique ID that links the tables.
    From my point of view the ODI Interface is the simpler way to maintain the "mapping", instead of PL/SQL, but I would like more feedback from other developers to be sure of my feeling (I've done only 2 Hyperion Planning + ODI projects before the current one).
    The complexity of the interfaces is low or medium: simple filters on one or two dimensions / DECODE mappings on Month / GROUP BY on similar records / for a few interfaces, more complex rules with IF statements.
    Thanks in advance

  • Best Practice question - null or empty object?

    Given a collection of objects where each object in the collection is an aggregation, is it better to leave references in the object as null or to instantiate an empty object? Now I'll clarify this a bit more.....
    I have an object, MyCollection, that extends Collection and implements Serializable(work requirement). MyCollection is sent as a return from an EJB search method. The search method looks up data in a database and creates MyItem objects for each row in the database. If there are 10 rows, MyCollection would contain 10 MyItem objects (references, of course).
    MyItem has three attributes:
    public class MyItem implements Serializable {
        String name;
        String description;
        MyItemDetail detail;
    }
    When creating MyItem, let's say that this item didn't have any details, so there is no reason to create MyItemDetail. Is it better to leave detail as a null reference, or should a MyItemDetail object be created? I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case. There are reasons for both approaches. Obviously, a bunch of empty objects going over RMI is a strain on resources whereas a bunch of null references is not. But on the receiving end, you have to account for the MyItemDetail reference being null or not - is this a hassle or not?
    I looked for this at Java Practices (http://www.javapractices.com) but found nothing.
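For concreteness, here is a small sketch contrasting the two options. MyItemDetail's text field is invented for illustration (the original post doesn't show its contents), and the shared EMPTY constant is one common way to get null-object convenience without shipping many distinct empty objects over RMI.

```java
// Option 1: leave the reference null and check at the call site.
// Option 2: hand out a shared empty "null object" so callers never check.
public class NullVsEmptyDemo {
    static class MyItemDetail {
        // One shared empty instance, reused everywhere (hypothetical).
        static final MyItemDetail EMPTY = new MyItemDetail("");
        private final String text;
        MyItemDetail(String text) { this.text = text; }
        String getText() { return text; }
    }

    // Option 1 in action: every caller carries the null check.
    static String describe(MyItemDetail detail) {
        return (detail == null) ? "(no detail)" : detail.getText();
    }

    public static void main(String[] args) {
        System.out.println(describe(null));           // the null-check path
        System.out.println(MyItemDetail.EMPTY.getText().length()); // no check needed
    }
}
```

Neither option is free: null pushes a check onto every receiver, while the null object trades that for an extra (cheap, shared) instance and the risk of silently "valid-looking" empty data.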

    > I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case.
    It depends, but in general I use null.
    > Stupid.
    Thanks for that insightful comment.
    > I do a lot of database work though. And for that null means something specific.
    Sure, return null if you have a context where null means something - like, for example, that you got no result at all. But as I said before, it's best to keep the nulls at the perimeter of your design. Don't let nulls slip through.
    As I said, I do a lot of database work, and there null does mean something specific. Thus (in conclusion) that means that, in "general", I use null most of the time.
    Exactly what part of that didn't you follow?
    And exactly what sort of value do you use for a Date when it is undefined? What non-null value do you use such that your users do not have to write exactly the same code that they would to check for null anyways?

  • Best practice for load balancing on SA540

    Are there some 'best practice' guide to configure out load balancing on SA540 .?
    I've got 2 ADSL lines and would like device to auto manage outgoing traffic .. Any idea ?
    Regards

    Hi,
    SA500 today implements flow based round robin load balancing scheme.
    In the case of two WAN link (over ADSL), by default, the traffic should be "roughly" equally distributed.
    So in general, users should have no need to configure anything further for load balancing.
    The SA500 also supports protocol binding (~PBR) over WAN links. This mechanism offers more control on how traffic can flow.
    For example, if one ADSL link offers higher throughput than the other, you can consider binding bandwidth-hungry apps to the WAN interface connected to the faster ADSL link and the less bandwidth-hungry apps to the other one. The remaining traffic can continue to round robin. This way you won't saturate the low-bandwidth link, and users get a better application experience.
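To illustrate what "flow based round robin" means in that answer, here is a toy sketch (not SA500 code): each new flow is pinned to the next link in turn, and packets of an existing flow always stay on the link the flow was assigned to.

```java
import java.util.HashMap;
import java.util.Map;

// Toy flow-based round robin: new flows alternate across links,
// existing flows keep their assigned link.
public class FlowBalancer {
    private final String[] links;
    private final Map<String, String> flowToLink = new HashMap<String, String>();
    private int next = 0;

    public FlowBalancer(String... links) { this.links = links; }

    public String linkFor(String flowId) {
        String link = flowToLink.get(flowId);
        if (link == null) {                 // new flow: assign round robin
            link = links[next];
            next = (next + 1) % links.length;
            flowToLink.put(flowId, link);
        }
        return link;                        // existing flow keeps its link
    }
}
```

With two links, new flows land on them alternately, which is why traffic ends up "roughly" equally distributed without any configuration.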
    Regards,
    Richard

  • Best-practice for use of object styles to manage image text wrap issues when aiming at both print and EPUB output?

    I have a work-flow question about object styles, text-wrap, and preparing a long document with lots of images for dual print/EPUB output in InDesign CC 2014.
    I am sort of experienced with InDesign but new to EPUB export. I have hundreds of pages and hundreds of images so I'd like to make my EPUB learning curve, in particular, less painful.
    Let me talk you through what I'm planning and you tell me if it's stupid.
    It's kind of a storybook-look I'm going for. Single column of text (6" by 9" page) with lots of small-to-medium images on the page (one or two images per page), and the text flowing around, sometimes right, sometimes left. Sometimes around the bounding box, sometimes following the edges of the images. So in each case I'm looking to tweak image size and placement and wrap settings so that the image is as close to the relevant text as possible and the layout isn't all wonky. Lovely print page the goal. Lots of fussy trade-offs and deciding what looks best. Inevitably, this will entail local overrides of paragraph styles. So what I want to do, I guess, is get the images as closely placed as possible, before I do any of that overriding. Then I divide my production line.
    1) I set aside the uniformly-styled doc for later EPUB export. (This is wise, right? Start for EPUB export with a doc with pristine styles?)
    2) With the EPUB-bound version set aside, I finish preparing the print side, making all my little tweaks. So many pages, so many images. So many little nudges. If I go back and nudge something at the beginning everything shifts a little. It's broken up into lots of separate stories, but still ... there is no way to make this non-tedious. But what is best practice? I'm basically just doing it by hand, eyeballing it and dropping an inline anchor to some close bit of text in case of some storm, i.e. if there's a major text change my image will still be almost where it belongs. Try to get the early bits right so that I don't have to go back and change them and then mess up stuff later. Object styles don't really help me with that. Do they? I haven't found a good use for them at this stage (Obviously if I had to draw a pink line around each image, or whatever, I'd use object styles for that.)
    Now let me shift back to EPUB. Clearly I need object styles to prepare for export. I'm planning to make a left float style and a right float style and a couple of others for other cases. And I'm basically going to go through the whole doc selecting each image and styling it in whatever way seems likeliest. At this point I will change the inline anchors to above line or custom, since I'm told EPUB doesn't like the inline ones.
    I guess maybe it comes down to this. I realize I have to use object styles for images for EPUB, but for print, manual placement - to make it look just right - and an inline anchor seems best? I sort of feel like if I'm going to bother to use object styles for EPUB I should also use them for print, but maybe that's just not necessary? It feels inefficient to make so many inline anchors and then trade them for a custom thing just for EPUB. But two different outputs means two different workflows. Sometimes you just have to do it twice.
    Does this make sense? What am I missing, before I waste dozens of hours doing it wrong?

    I've moved your question to the InDesign EPUB forum for best results.
