JHeadstart and multiple developers - best practices

Hi,
What is the best way to use JHeadstart when you have a team of several programmers?
We've seen that several examples have only one "Application Structure File", so it seems quite difficult for a group of programmers to work in parallel.
What is the best way to handle this situation? Several "Application Structure Files"?
Our project will have a team of 4 programmers, and we certainly want them to work efficiently with this tool.
Any thoughts?
Best regards,

Michael et al,
I am working on a big ADF project where we do the following:
- one big Model project with all business component objects, but we organize the EO's, VO's and AM's in separate packages, and for the VO's, we make sub-packages per functional subsystem
- several ViewController projects, one for each functional subsystem, each having its own struts-config-<subsystemname> file.
- within a ViewController project, we sometimes have multiple Application Structure files generating into the same struts-config file.
Please see this thread for more info on config management tools:
Re: Jheadstart, JDeveloper and CVS and TeamWork
Steven Davelaar,
JHeadstart Team.

Similar Messages

  • Integrating APEX and E-Business best practices?

    My company has been using the E-Business Suite for the past decade, and I've finally convinced them to set up APEX (now that 4.0 is out). However, I haven't been able to find much that's current on integrating the two, other than [this paper|http://www.oracle.com/technology/products/database/application_express/pdf/apex_ebs_wp_cabot_consulting.pdf] by Cabot Consulting. Is this the accepted best practice for integrating the two systems?
    Our DBA is hesitant about letting us use the apps schema as the parsing schema (per the document's recommendation), as that gives APEX developers almost unlimited access ("the keys to the kingdom", in his words). Is there a better way, or something I can tell him to allay his concerns?
    Thanks,
    -David

    Hi David,
    You don't have to use APPS as the parsing schema; you can use another schema and grant it execute on an EBS authentication function that you have created in the APPS schema. This will work fine. You will also have to grant access to any other APPS objects that you need to use.
    Apart from the bit of extra work of granting the required privileges, the downside of not using the APPS schema as the parsing schema is that not all the EBS APIs will work if run from another schema. Most will, but some won't, because they use dynamic SQL that expects the parsing schema to be APPS.
    This is the reason why most EBS developers do all their work in the APPS schema, e.g. when developing concurrent request programs. Whichever approach you take, you need to ensure that APEX is clamped down in production with no developer access.
    So it all depends on what you are hoping to achieve and how you design your APEX applications. If you are not calling EBS APIs you will be fine with a separate schema.
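    For illustration only (not part of the original reply): the grants behind this approach might look roughly like the sketch below, wrapped here in a small python-oracledb script. The parsing schema APEX_PARSE and the function XX_APEX_EBS_AUTH are made-up example names.
    # Illustration: one-off grants so a non-APPS parsing schema can call an
    # EBS authentication function created in APPS (all names are made up).
    import oracledb
    conn = oracledb.connect(user="system", password="...", dsn="ebsdb")
    cur = conn.cursor()
    # Let the APEX parsing schema execute the custom authentication function...
    cur.execute("GRANT EXECUTE ON apps.xx_apex_ebs_auth TO apex_parse")
    # ...plus any other APPS objects the application actually needs.
    cur.execute("GRANT SELECT ON apps.fnd_user TO apex_parse")
    cur.close()
    conn.close()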
    Similarly with performance: you may be fine on the same server, for example if you have a separate RAC node for APEX or are using separate tables/indexes. On the other hand, if you try to use the APPS security views for reporting in APEX you will quickly run into performance issues.
    Rod West

  • SOA 11g  Composite Deployment across multiple Instances: Best Practice

    Hi,
    We have a requirement to deploy the composite across multiple instances (DEV, TEST, Production) without JDev.
    We are using SOA 11.1.3.3 (cluster) and Linux OS.
    Please suggest the best practice for deploying the SOA composite.
    Thanks,
    AB

    Why are there different ways to deploy the composite in different environments? As the environment's business importance increases, developer access gets more restricted, and hence there are several ways of deploying. If you are developing an application, you would not like to export a SAR and then log in to the EM console to deploy it. For a developer it is always very convenient to use the IDE itself for deployment, and hence JDev is preferred for deployment to Dev instances.
    Once development finishes, the developer checks the artifacts into the version control system, and if you want to deploy to a test instance you have to check out the stable version, compile it and package it, and only then can you deploy. Hence for test instances, ANT is the preferable mode of deployment. (Remember that developers may not be able to connect to the Test environment directly, and hence deployment from JDev is not possible.)
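    For illustration only (not from this reply): that ANT step can itself be scripted. Below is a minimal Python sketch that drives the ant-sca-deploy.xml script shipped with SOA Suite 11g; the script path, server URL, SAR location and user are made-up examples.
    # Illustration: run the SOA 11g ant-sca-deploy.xml "deploy" target from Python.
    # All paths, the URL and the user are example values.
    import subprocess
    cmd = [
        "ant",
        "-f", "/u01/oracle/mw/soa/bin/ant-sca-deploy.xml",
        "deploy",
        "-DserverURL=http://testsoa-host:8001",
        "-DsarLocation=/builds/sca_MyComposite_rev1.0.jar",
        "-Doverwrite=true",
        "-Duser=weblogic",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        raise SystemExit(result.stderr)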
    Once a configuration has been tested in the Test environment, its SAR should be checked into the version control system so that it does not need to be recompiled and repackaged. This also makes sure that any artifact which has not been tested on the Test instance does not go to higher instances (like UAT/PROD). In Pre-Prod/UAT/Prod you may simply access the EM console and deploy the SAR that was already tested on the Test instance. Remember that there will be very limited access to such critical environments, and using role-based access in EM it is easy to control who deploys the resources. Moreover, it is a more secure mode of deployment, because only users with the appropriate privileges will be able to deploy, and it is also easier to track the changes.
    What are the pros and cons if we use only one way to deploy the composite across the instances? As such there are no major pros and cons. You may use EM for all environments, but it may be a little inconvenient for developers/deployers of the test environment. You may also use ANT for all environments, but that is not suggested unless you have a very good and secure deployment process in place.
    Regards,
    Anuj

  • Project Server / Project Pro = Cut and Paste official best practices to avoid corruption

    It is pretty common knowledge among Project Server veterans that Cut and Paste between Project Pro files can cause corruption with the assignment table. This leads to Publish and Reporting (Project Publish)
    failures. I just went through an upgrade from Project Server 2007 to 2013 and a LOT of files had to be rebuilt from scratch. 
    I ask my users to not Cut and Paste within or between project files. They do not always listen.
    I have looked and cannot find official Microsoft text on the subject or at least some sort of official best practices doc. It would help me prevent some corruption if I had a doc with a Microsoft logo that says “It is not recommended….”
    Or “If you do this you will make your admin cry…”
    If this doc is right in front of me and I just missed it I apologize in advance.
    Thanks

    John --
    To the best of my knowledge, there is no such Best Practice document from Microsoft.  Based on my experience with Project Server from Project Central through Project Server 2013, following are some items that can cause corruption in your project:
    Use of special characters in the name of the project.
    Cutting/copying/pasting entire task rows within a single enterprise project or between multiple enterprise projects.
    Opening an enterprise project on your laptop while connected to a wireless network, then closing the laptop lid, walking to another room in the building when you pass from one wireless access point to another, and then opening the laptop lid again. 
    I learned this from one of our former clients.
    Beyond this, there are some other items that will not corrupt your project, but which will cause problems with saving or publishing a project.  These items include:
    Using blank rows in the project.
    Using special characters in task names.
    Task names that start with a space character (this sometimes happens when copying and pasting from an Excel workbook).
    In both Project Server 2010 and 2013, the recommended method for resolving a corrupted project is to use the Save for Sharing feature. Refer to my directions in the following thread on how to use the Save for Sharing feature:
    http://social.technet.microsoft.com/Forums/projectserver/en-US/ec2c5135-f997-40e1-8fe1-80466f86d373/task-default-changing?forum=projectprofessional2010general
    Beyond what guidance I have provided, I am hoping other members of this community will add their thoughts as well.  Hope this helps.
    Dale A. Howard [MVP]

  • OBIEE Report and Dashboard development best practice

    Hi All,
    Is there any best practice available on OBIEE report and dashboard development? Any help would be appreciated.
    Thanks,
    RK

    http://forums.oracle.com/forums/thread.jspa?messageID=2718365
    this might help you
    Thanks

  • EFP and Intranet Portal - Best Practices for architecture

    Hi All,
    We are planning to create a portal for our partners (B2B scenario). This portal will provide anonymous and user-specific access. We also have an intranet portal with ESS and MSS. I have some doubts around implementing the EFP and the intranet portal.
    1) What is the best practice solution architecture around portal instances? Is it suggested to implement both the intranet and internet portals on the same portal instance, or do you implement them as 2 separate instances (and 2 separate boxes), one for intranet and another for internet?
    2) There are a couple of functions shared between the intranet and internet portals. Has anyone attempted an FPN connection between an internet portal and an intranet portal?
    3) What are the best practices around an external portal connecting to ECC directly?
    Any suggestions are greatly appreciated.

    Hi Pallayya,
    We are implementing a similar kind of thing for our client, so I can explain it and you may find some of it useful.
    We decided between two approaches.
    1) We have one instance of the portal that is accessed from both the intranet and the Internet. We use two different URLs, one for the intranet and one for the Internet. There is also a reverse proxy in the picture, because it increases the security of the portal on the Internet; the reverse proxy sits in the DMZ. We have integrated ECC and BW with the portal, plus some Web Dynpro applications (for the intranet only).
    A user coming from the Internet hits the reverse proxy first -> the request then goes to the EP server -> EP decides what kind of request the client is making:
    If it is an ECC request -> the portal server sends the request back to the reverse proxy -> the RP sends it to the Web Dispatcher -> then it goes to ECC.
    If it is a BW request -> the portal server sends the request to the BW server.
    2) There are two different URLs for the portal on the intranet and the Internet.
    For that we used two reverse proxies.
    regards
    Indranil

  • Multiple Libraries - Best practice

    I have been using iPhoto for the last 5 years. At the end of each year I move the library to an external drive and use iPhoto Buddy to point to it. Now I have 6-7 libraries and I would rather have everything together.
    What have people found to be the best way to combine libraries so that all files end up in one directory on the FireWire drive, and then the best way to move this year's photos from the hard drive to the external drive later? Or is having multiple libraries best?
    It seems like I am always moving files to and from libraries, I lose any structure, and backing it up is a pain. Am I missing something here? Thanks in advance.
    jf

    Jeff:
    Welcome to the Apple Discussions. The only way at present to merge multiple libraries into one library and retain keywords, rolls, etc., is to use the paid version of iPhoto Library Manager. That will get you one library. If that library gets very large then you might have a storage problem on your boot drive, where you want to keep adequate free space for optimal performance. In that case multiple libraries might be best for you.
    FWIW, here's my photo strategy. I keep my photos in folders based on the shoot (each upload from the camera after an event, usually family get-togethers), in folders dated for the event with a brief description. I then import that folder into iPhoto to get a roll with the same name. Since I upload before importing to iPhoto, I also batch rename the files with the international date format: YYYY-MM-DD-01.jpg and so forth.
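    (Purely as an illustration of that kind of date-based batch rename, and not part of the original post: a small Python sketch. The folder name and date are made-up examples.)
    # Rename every JPEG in an event folder to YYYY-MM-DD-NN.jpg.
    import os
    folder = "2006-03-12 Birthday"   # example event folder
    date_prefix = "2006-03-12"
    jpegs = sorted(f for f in os.listdir(folder) if f.lower().endswith(".jpg"))
    for i, name in enumerate(jpegs, start=1):
        new_name = f"{date_prefix}-{i:02d}.jpg"
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))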
    With iPhoto's new capability of using only aliases for the original/source files (I kept my original folders as a backup on a second drive) I used those original source folders for my combined library by importing those folders into a new V6 library using the alias method. So, for 17,400+ photos that take up 27G on my second drive, my iPhoto library on my boot drive only takes up 8G. This is very valuable for PB owners who can't afford that much library space on the boot drive. The one downside to the alias method is that if you delete a photo from the library the source/original file does not get deleted. Hopefully that will be in the next update.
    If the external is dismounted the library is still useable in that you can view thumbnails, create, delete and move albums, add comments and keywords (but with some effort). You can't do anything that requires moving the thumbnails or edit. I've created a workflow to convert from a conventional iPhoto library structure to an alias based one.
    The downside to converting as I mentioned is that you will lose keywords, albums, etc. due to the conversion. So it's not for everyone.
    Bottom line: for combining/merging libraries, iPhoto Library Manager is the only option at the present time. Hope this has been of some help to you.

  • _h and _v Content, Best Practice

    Hello
    While I am working on the same content of an article in both _h and _v, I noticed that in _h I added 7 pages while in _v there are 6 pages,
    and I realized this is not good for getting a perfect result.
    So I added one more page in _v and started to change the size of the images so the content flows onto page 7.
    I was able to do that, and then I noticed that the 14th subtitle in _h is on the 5th page while the same subtitle in _v is on the 6th page.
    Is this normal? Or do I have to match the content so that when the reader rotates the device he gets the same result and can keep following the text?
    regards

    Hi Greg:
    To access the Best Practices using Business Content, do the following:
    1.-Open this URL: http://help.sap.com/bp_bw370/html/index.htm
    2.-Click on the "Preconfigured Scenarios" link.
    3.-Click on the Links to navigate on the different available scenarios and to see the documents you can download (Scenario Documentation, Scenario Installation Guide and Building Blocks).
    *Financials
    -- Financial Accounting Analysis
    -- Controlling Analysis
    -- CO-PA Analysis
    -- Cost Center Planning
    -- Reporting Financials EhP3
    *Customer Relationship Management
    -- Sales Analysis
    -- Cross-Functional Analysis: Financial and Sales Data
    -- Booking Billing Backlog Analysis
    -- Sales Planning
    -- Scheduling Agreements Analysis
    -- CRM Analytics
    *Supply Chain Management
    -- Purchasing Analysis
    -- Manufacturing Analysis
    -- Inventory Analysis
    -- Demand Planning Analysis
    -- Resource and Operation Data Analysis
    *Product Lifecycle Management
    -- Project System - Controlling and Dates
    *Human Capital Management
    -- Cross-Application Time Sheet
    -- Time Management - Time and Labor
    -- Personnel Development - Qualifications
    -- Travel Management - Travel Expenses
    *General
    -- Data Mining - ABC Classification
    Regards,
    Francisco Milán.

  • Resource Bundles and Embedded fonts - best practice

    Hello,
    I am digging into creating a localized app and I would
    like to use embedded fonts.
    Ideally, I would like to have two locales (for example), each
    with a different embedded font and/or unicode range.
    For example, how do I set up my localized app so that EN uses
    Arial (Latin Range), and JP uses Arial Unicode MS with Japanese
    Kanji unicode range ?
    Note that I do know how to embed fonts with different ranges,
    I just don't know how to properly embed them into resource bundles
    and access them easily in style sheets.
    Thanks!
    -Daniel

    Hello!
    This is a very pertinent question; however, as with many things in life, there is no one-size-fits-all answer here.
    We basically recommend, as a best practice, allocating to each specific context only the estimated resources it needs. These values should always come from a previous study of the network patterns/load.
    To accommodate growth and scalability, it is strongly advised to initially keep as many resources reserved as possible and allocate the unused resources as needed. To accomplish this goal, you should create a reserved resource class, as you already did, with a guarantee of 20 to 40 percent of all ACE resources, and configure a virtual context solely for the purpose of ensuring that these resources stay reserved.
    As you might already know, ACE protects resources in use; this means that when decreasing a context's resources, the resources must be unused before they can be reused by another context. Although it is possible to decrease the resource allocations in real time, it typically requires additional overhead to clear any used resources before reducing them.
    Based on the traffic patterns, number of connections, throughput, concurrent SSL connections, etc., for each of the sites you will be deploying, you will have a better idea of the estimated resources needed and can then assign them to each of the contexts. So this is something that greatly depends on the customer's network environment.
    Hope this helps to clarify your doubts.

  • Embedding same images across multiple components, best practices?

    Hi Guys,
    Once again a great forum.
    I have a fairly large dashboard application, built from many
    components, from an obvious maintenance point of view this is
    preferred.
    To keep the look and feel the same across each component I
    reuse a lot of the icons; these icons have 3 states: disabled,
    normal and hot (rollover).
    The mouse roll over and roll out are the same for each set of
    icons. At the moment I have each component embed each icon and
    repeat the functions.
    This is not the optimum way to do this, so I'm looking for best
    practices. Should I build a separate ActionScript package with all
    the embedded images and roll over/out functions, then import that
    into each component? But then will I embed ALL the images in
    each component? Or should I embed all the images in the main container
    app; when Flex then sees the same image in a component, will it
    embed it again or not?
    In the example below, 4 of the components have the same
    refresh icon and rollover states, so this code is repeated (bad
    practice) in all 4. Moving to a separate ActionScript package will
    make maintenance easier, but as stated above, will just one copy get
    embedded for the entire app, or will 4 copies get embedded?
    [Embed(source="images/refresh_24.png")]
    [Bindable]
    private var refreshIcon:Class;
    [Embed(source="images/refresh_24_hot.png")]
    [Bindable]
    private var refreshIconHot:Class;
    [Embed(source="images/refresh_24_dis.png")]
    [Bindable]
    private var refreshIconDis:Class;
    // Swap to the "hot" icon on roll over, unless the disabled icon is showing.
    private function rollOverRefresh(event:Event):void {
        if (event.target.source != refreshIconDis)
        {event.target.source = refreshIconHot;}
    }
    // Restore the normal icon on roll out, unless the disabled icon is showing.
    private function rollOutRefresh(event:Event):void {
        if (event.target.source != refreshIconDis)
        {event.target.source = refreshIcon;}
    }

    Flex is able to collate those Embeds so they are only
    included once IIRC.
    While it may seem like bad practice to include it in each
    component, it really isn't, from a code reusability standpoint. But
    you could probably continue on that path and create a custom
    component <mycontrols:RefreshButton> that has the embeds
    in one place for you.
    You could probably also do something similar with style very
    easily:
    .MyButton {
        overSkin: Embed(source="../assets/orb_over_skin.gif");
        upSkin: Embed(source="../assets/orb_up_skin.gif");
        downSkin: Embed(source="../assets/orb_down_skin.gif");
    }
    Then apply that style to your button.
    You could also put the button into a single SWF file in Flash
    and include it that way to reduce the number of embeds. I never
    include PNG, JPG, GIF, etc files directly, always SWF as you get
    better compression that way IMO. Plus I just find it gives me
    greater flexibility...if I want to animate my skins in the future
    (button that gleams for instance), I already have SWF's in my code
    so no need to change anything out but the SWF.

  • DW:101 Question - Site folder and file naming - best practices

    OK - my 1st post! I'm new to DW and fairly new to developing websites (I've done a couple in FrontPage and a couple in SiteGrinder), although not new at all to technical concepts: building PCs, figuring things out, etc.
    For websites, I know I have a lot to learn and I'll do my best to look for answers, RTFM and all that before I post. I even purchased a few months of access to lynda.com for technical reference.
    So no more introduction. I did some research (and I kind of already knew) that for file names and folder names: no spaces, just dashes or underscores, don't start with a number, keep the names short, no special characters.
    I’ve noticed in some of the example sites in the training I’m looking at that some folders start with an underscore and some don’t. And some start with a capital letter and some don’t.
    So the question is - what is the best practice for naming files, and especially folders? And what is the best way to organize the files in the folders? For example, all the .css files in a folder called 'css' or '_css'.
    While I'm asking, are there any other things along the lines of just starting out that I should be looking at? (If this is way too general a question, I understand.)
    Thanks…
    \Dave
    www.beacondigitalvideo.com
    By the way I built this site from a template – (modified quite a bit) in 2004 with FrontPage. I know it needs a re-design but I have to say, we get about 80% of our video conversion business from this site.

    So the question is - what is the best practice for naming files, and especially folders? And what is the best way to organize the files in the folders? For example, all the .css files in a folder called 'css' or '_css'.
    For me, best practice is always the nomenclature and structure that makes most sense to you, your way of thinking and your workflow.
    Logical and hierarchical always helps me.
    Beyond that:
    Some seem to use _css rather than css because (I guess) those file/folder names rise to the top in an alphabetical sort. Or perhaps they're used to that from a programming environment.
    Some use CamelCase, some use all lowercase or special_characters to separate words.
    Some work with CMSes or in team environments which have agreed schemes.

  • Setting Disks/Caches/Vault for multiple projects - Best Practices

    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents). It is also best practice to save the Project File to your primary drive.
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    Thanks!

    maxwell wrote:
    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    Yes
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents).
    Yes
    It is also best practice to save the Project File to your primary drive.
    I don't. And I don't think it matters. But you should back that file up to some other drive (like Time Machine).
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    I wouldn't worry about it.
    o| TOnyTOny |o

  • Integrating Multiple systems - best practice

    Hi,
    I need to integrate the following scenario; please let me know the best practice, with steps.
    The scenario is end-to-end synchronous:
    System A (supports open technology, i.e. a web service client) => SAP PI <=> Oracle (call stored procedure) => update in ECC => response back to system A (source)
    Thanks
    My3

    Hi Mythree,
    First get the request from the web service into PI and then map it to the stored procedure using a synchronous send step (Sync Send1) in BPM and get the response back. Once you have the response back from Oracle, use another synchronous send step: take the response from the database and map it to ECC (use either a proxy or an RFC), which is Sync Send2 in the BPM, and get the updates back. Once you have that response, send it back to the source system. The steps in BPM would be like this:
    Start --> Receive --> Sync Send1 --> Sync Send2 --> Send --> Stop
    These blogs might be useful for this integration:
    /people/siva.maranani/blog/2005/05/21/jdbc-stored-procedures
    /people/luis.melgar/blog/2008/05/13/synchronous-soap-to-jdbc--end-to-end-walkthrough
    Regards,
    ---Satish

  • SAP SCM and SAP APO: Best practices, tips and recommendations

    Hi,
    I have been gathering useful information about SAP SCM and SAP APO (e.g., advanced supply chain planning, master data and transaction data for advanced planning, demand planning, cross-plant planning, production planning and detailed scheduling, deployment, global available-to-promise (global ATP), CIF (core interface), SAP APO DP planning tools (macros, statistical forecasting, lifecycle planning, data realignment, data upload into the planning area, mass processing - background jobs, process chains, aggregation and disaggregation), and PP/DS heuristics for production planning).
    I am especially interested about best practices, tips and recommendations for using and developing SAP SCM and SAP APO. For example, [CIF Tips and Tricks Version 3.1.1|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700006480652001E] and [CIF Tips and Tricks Version 4.0|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000596412005E] contain pretty useful knowledge about CIF.
    If you know of any useful best practices, tips and recommendations for using and developing SAP SCM and SAP APO, I would appreciate it if you could share them with me.
    Thanks in advance for your help.
    Regards,
    Jarmo Tuominen

    Hi Jarmo,
    Apart from what DB has suggested, you should give the following a good read:
    -Consulting Notes (use the application component filters when searching notes)
    -Collective Notes (similar to the one above)
    -Release Notes
    -Release Restrictions
    -If $$ permit, subscribe to www.scmexpertonline.com. Good perspective on concepts around SAP SCM.
    -There are a couple of blogs (e.g. www.apolemia.com), but all lack breadth; some cover a few topics in depth.
    -The "Articles" section on this site (not all are classified well; look under ECC ops, mfg, SCM, Logistics, etc.)
    -Service.sap.com - check the solution details overview in the knowledge exchange tab. There are product presentations and collaterals for every release. Good breadth but no depth.
    -Building Blocks - available for all application areas. This is limited to vanilla configuration, just making a process work and nothing more than that.
    -Get the book "Sales and Operations Planning with SAP APO" from SAP Press. It has plenty of easy-to-follow material, good perspective and lots of screenshots to make life easier.
    -help.sap.com - the last thing most people refer to after all the "handy" options (incl. this forum) are exhausted. Nevertheless, this is the superset of all "secondary" documents. But the maze of hyperlinks that starts at APO might lead you to something like an XML schema.
    Key tip: appreciate that SAP SCM is largely driven by the connected execution systems (SAP ECC/ERP). So the best place to start is a good overview of the ERP OPS solution, at least at a significant level of depth. Check the document "ERP ops architecture overview" on the SDN wiki.
    I have a good collection of documents, though many of them I haven't read myself. If you need them let me know.
    Regards,
    Loknath

  • Handling Error queue and Resolving issues - Best practices

    This is related to Oracle 11g R2 streams replication.
    I have a table A at the source and table B at the destination. During replication by Oracle Streams, if there is an error in the apply process, the LCR will go to the error queue table.
    Please share best practices on the following:
    1. Maintaining the error table operation,
    2. Monitoring the errors in such a way that other LCRs are not affected,
    3. Reapplying the resolved LCRs from the error queue back to table B, and
    4. Retaining synchronization without degrading the performance of Streams.
    Please share some real-world insight into the error queue handling mechanism.
    Appreciate your help in advance.
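    (Not part of the original thread, purely as a hedged starting point: a minimal python-oracledb sketch around the standard Streams error-queue objects, the DBA_APPLY_ERROR view and DBMS_APPLY_ADM.EXECUTE_ERROR. Connection details and the transaction id placeholder are made up.)
    # Illustration: list apply errors, then re-execute one corrected transaction.
    import oracledb
    conn = oracledb.connect(user="strmadmin", password="...", dsn="destdb")
    cur = conn.cursor()
    # Monitor the error queue without touching other LCRs.
    cur.execute("SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error")
    for apply_name, txn_id, msg in cur:
        print(apply_name, txn_id, msg)
    # After fixing the root cause, reapply a specific transaction from the error queue.
    cur.callproc("DBMS_APPLY_ADM.EXECUTE_ERROR", ["<local_transaction_id>"])
    conn.commit()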
