Small data aggregation question

Starting with this data:
select * from things
NAME                 TYPE                 THING               
dave                 item                 can                 
mike                 item                 box                 
mike                 consumer elec        television          
mike                 consumer elec        radio               
mike                 automobile           volvo               
ryan                 automobile           saab                
ryan                 automobile           chevrolet           
mike                 automobile           volvo               
mike                 automobile           volvo               
mike                 automobile           volvo               
mike                 consumer elec        radio               
mike                 consumer elec        radio               
mike                 consumer elec        radio               
13 rows selected

I have successfully constructed this query, which almost gets me where I want to be:
    select  name,
            ltrim(max(sys_connect_by_path(thing,','))
            keep(dense_rank last order by curr),',') as things
      from  (select
               name,
               thing,
               row_number() over (partition by name order by thing) as curr,
               row_number() over (partition by name order by thing) -1 as prev
             from things)
  group by  name
connect by  prev = prior curr
       and  name = prior name
start with  curr = 1;
NAME                 THINGS                                                       
dave                 can                                                          
mike                 box,radio,radio,radio,radio,television,volvo,volvo,volvo,volvo
ryan                 chevrolet,saab                                               
3 rows selected

What I want (hope for, rather) is this:
NAME                 THINGS                                                       
dave                 can                                                          
mike                 box,radio(4),television,volvo(4)
ryan                 chevrolet,saab

Can anyone give me some clues to help me get what I want?
Thanks!

Just aggregate your data first: group by name, type, and thing in the inline view so each distinct thing carries its count, then append the count in parentheses when it is greater than one:
with things as(
select 'dave' name,'item' type,'can' thing from dual union all
select 'mike','item','box' from dual union all
select 'mike','consumer elec','television' from dual union all
select 'mike','consumer elec','radio' from dual union all
select 'mike','automobile','volvo' from dual union all
select 'ryan','automobile','saab' from dual union all
select 'ryan','automobile','chevrolet' from dual union all
select 'mike','automobile','volvo' from dual union all
select 'mike','automobile','volvo' from dual union all
select 'mike','automobile','volvo' from dual union all
select 'mike','consumer elec','radio' from dual union all
select 'mike','consumer elec','radio' from dual union all
select 'mike','consumer elec','radio' from dual)
-- Test data
    select  name,
            ltrim(max(sys_connect_by_path(thing||decode(cnt,1,null,'('||cnt||')'),','))
            keep(dense_rank last order by curr),',') as things
      from  (select
               name,
               thing,
               count(*) cnt,
               row_number() over (partition by name order by thing) as curr,
               row_number() over (partition by name order by thing) -1 as prev
             from things
             group by name, type, thing)
  group by  name
connect by  prev = prior curr
       and  name = prior name
start with  curr = 1;
NAME            THINGS
dave            can
mike            box,radio(4),television,volvo(4)
ryan            chevrolet,saab

Best regards
Maxim
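
For what it's worth, on Oracle 11g Release 2 and later the CONNECT BY string-building trick can be replaced with LISTAGG. A minimal sketch against the same things table (this assumes an 11gR2+ database, which the thread does not confirm):
    select  name,
            -- append the count in parentheses only when a thing occurs more than once
            listagg(thing||decode(cnt,1,null,'('||cnt||')'),',')
              within group (order by thing) as things
      from  (select name, type, thing, count(*) cnt
               from things
              group by name, type, thing)
  group by  name;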

Similar Messages

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we are working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the settings below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlockLow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current setup allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • Is there any documentation which throws light on how data aggregation happens in data warehouse grooming? what algorithm exactly it follows in different aggregation type (raw, hourly, daily)?

    How exactly does it pick a specific data value during hourly and daily aggregations? How is the value chosen? Does it average the samples, or simply pick the value at the start of the hour/day or the end of the hour/day?

    I'll try one more time. :)
    Views in the operations console are derived from data in the operational database. This is always raw data, and typically does not go back more than 7 days.
    Reports get data from the data warehouse. Unless you create a custom report that uses raw data, you will never see raw data in a report - Microsoft and probably all 3rd party vendors do not develop reports that fetch raw data.
    Reports use aggregated data - hourly and daily. The data is aggregated by min, max, and avg sample for that particular aggregation. If it's hourly data, then you will see the min, max, and avg for that entire hour. Same goes for daily - you will see the min, max, and avg data sample for that entire day.
    And to try clarifying even more, the values you see plotted on the report are avg samples. If you drill into the performance detail report, then you can see the min, max, and avg samples, as well as standard deviation (which is calculated based on these three values).
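    To make that concrete, the hourly rollup described above amounts to something like the sketch below, written in Oracle SQL to match the rest of this page; the perf_raw table and its sample_time/sample_value columns are hypothetical stand-ins for the real data warehouse schema:
    select  trunc(sample_time, 'HH24') as sample_hour,  -- bucket each raw sample into its hour
            min(sample_value)          as min_value,
            max(sample_value)          as max_value,
            avg(sample_value)          as avg_value
      from  perf_raw
     group by trunc(sample_time, 'HH24');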
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Create a small database stored in a file

    I want to create a small database of about 10 to 20 items. Each item will have an item number, serial number, time stamp, and description. I want to store these items in a file and be able to retrieve them by item number. How can I do this with LabVIEW?

    For something this small, use a single VI to manage it in memory:
    It has a FUNCTION input, an enum with values of (INIT, READ FILE, WRITE FILE, ADD ITEM, FIND ITEM).
    It has a CLUSTER input, which is your record type {Item number, serial number, time stamp, description}
    It has a CLUSTER output, of the same type.
    It has an ITEM NUMBER input, which is an integer (assuming your item number is truly a number).
    The code is a WHILE loop with the CONTINUE input wired to FALSE (it never loops).
    Inside the WHILE LOOP is a CASE statement, with the selector wired to the FUNCTION control.
    For case INIT, make an empty array of records (your cluster type) and feed it to a shift register on the WHILE loop.
    For case WRITE FILE, take the shift register input and CREATE, WRITE, and CLOSE a file (pass it thru to the output as well). Wire the cluster to the DATALOG TYPE of the CREATE FILE function to create a datalog file.
    For case READ FILE, use OPEN FILE, READ FILE, and CLOSE FILE functions, with DATALOG TYPE wired to the cluster type.
    For case ADD ITEM, just append the new item (input cluster control) to the array from the shift reg and put the array back in the shift reg.
    For case FIND ITEM, just search thru the array (from the shift reg) until you find the matching item number, then return the whole record in the output.
    You'll have to pass the left shift reg thru the case to the right shift reg in all cases except INIT, READ FILE, and ADD ITEM.
    This means the actual storage is in the shift reg, for max efficiency.
    If you get beyond a hundred items, I would suggest a different FIND ITEM technique (keep a separate list of ITEM NUMBERS and search that, rather than the whole thing).
    This assumes you have control of shutdown - any changes you make are lost unless you call WRITE FILE afterwards.
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

  • Design Studio 1.3 : one large generic Data Source or multiple smaller Data Sources

    Dear all,
    In DS 1.3, is it still a best practice to have one large generic Data Source? Or is having multiple smaller Data Sources a better solution?
    Minimizing the number of Data Sources remains a golden rule, but what is the best solution:
    One large generic Data Source, using setDataSource
    or
    Multiple smaller Data Sources, using Load in Script and Background Processing
    Many thanks for sharing your ideas,
    Hans

    It depends on your application, how much data it is pulling in, and how you want to present that to the user/consumer of the application.
    At TechEd Las Vegas last year, SAP showed 9 dashboards (3 per row) with background processing for each row.

  • The DATA CAP MEGA THREAD....post data cap questions or comments here.

    In the interest of keeping things orderly....This is the DATA CAP MEGA THREAD....post data cap questions or comments here.
    Please keep it civil.
    Comcast is testing usage plans (AKA "data caps") in certain markets.
    The markets that are currently testing usage plans are:
    Nashville, Tennessee market: 300 GB per month and additional gigabytes in increments/blocks ( e.g., $10.00 per 50 GB ). 
    Tucson, Arizona market: Economy Plus through Performance tiers receive 300 GB. Those customers subscribed to the Blast! Internet tier receive 350 GB; Extreme 50 customers receive 450 GB; Extreme 105 customers receive 600 GB. Additional gigabytes in increments/blocks of 50 GB for $10.00 each in the event the customer exceeds their included data amount. 
    Huntsville and Mobile, Alabama; Atlanta, Augusta and Savannah, Georgia; Central Kentucky; Maine; Jackson, Mississippi; Knoxville and Memphis, Tennessee and Charleston, South Carolina: 300 GB per month and additional gigabytes in increments/blocks ( e.g., $10.00 per 50 GB ) Economy Plus customers have the option of enrolling in the Flexible-Data plan.
    Fresno, California, Economy Plus customers also have the option of enrolling in the Flexible-Data plan.
    - If you live outside of these markets you ARE NOT currently subject to a data plan.
    - Comcast DOES NOT THROTTLE your speed if you exceed your usage limits.
    - You can check out the Data Usage Plan FAQ for more information.

    I just got a call today that I reached my 300GB limit for the month. I called and got a pretty rude response from the security and data usage department. The guy told me in so many words that if I do not like or agree with the policy, I should feel free to find another service provider!!! I tried to explain that we watch Netflix and XFinity on-demand a lot, and I was told that that cannot be anywhere close to the data usage. I checked my router, and watching a "super HD, Dolby 5.1" TV show on Netflix will average about 5-6 GB per hour (1.6MB/s), so this means that I can watch no more than 1-2 Super HD TV shows a day via Netflix before I run out of my data usage. This seems a bit ridiculous, doesn't it? Maybe the TV ads about the higher speed than the competition should be accompanied with "as long as you don't use it too often". Not a good experience ...

  • Data archiving questionnaire required

    Dear All,
    We have been approached by one of our clients for DATA ARCHIVING from the R/3 system.
    Management has put my name forward for this.
    The requirement is to prepare a DATA ARCHIVING QUESTIONNAIRE template.
    Can anybody please help me out in this regard from an MM point of view?
    Thanking you in advance.
    Regards
    Nasir Chapparband.

    Hi,
    Refer following link;
    [http://itmanagement.earthweb.com/datbus/article.php/3109221]
    SAP Data Archiving
    1.0 Introduction to Enterprise Data Archiving
    Currently, a large number of enterprises use SAP R/3 as a platform for integration of business processes. The continuous usage of SAP results in huge amounts of enterprise data, which is stored in SAP R/3. With passage of time, the new and updated data is entered into the system while the old data still resides in the SAP enterprise system.
    Since some of the old data is critical, it cannot be deleted. The difficulty is keeping the data you want, and deleting the data you do not want. Hence, a SAP database keeps on expanding rapidly and enterprise systems, which have limited data retention abilities for a few years, suffer from problems such as data overflow, longer transaction processing times, and performance degradation.
    The solution of this problem has led to the concept of Data Archiving in SAP. Data Archiving removes out-of-date data from the SAP database that the R/3 system does not need online, but can be retrieved on a later date, if required. This data is known as archived data and is stored at an offline location. Data Archiving not only consistently removes data from the database but also ensures data availability for future business requirements.
    One rule of thumb is that in a typical SAP enterprise system, the ratio of data required to be online and instantly accessible to old data, which could be archived and stored offline, is 1:6. For example, if an enterprise has 2100 GB of SAP database, the online data which is frequently used by SAP users will be 300 GB and the rest (1800 GB) will be rarely used and hence can be archived.
    1.1 Data Archiving – Features
    It provides a protection layer to the SAP database and resolves underperformance problems caused by huge volumes of data. It is important that SAP users should keep only minimal data to efficiently work with database and servers. Data archiving ensures that the SAP database contains only relevant and up-to-date data that meet your requirements.
    Data archiving uses hardware components such as hard disks and memory. For efficient data archiving, minimum number of disks and disk space should be used.
    It also reduces the system maintenance costs associated with the SAP database. In the SAP database there are various procedures such as, data backup, data recovery, and data upgrade.
    SAP data archiving complies with statutory data retention rules using common and well-proven techniques.
    SAP data archiving can be implemented in two ways. In the next section both options will be discussed in detail.
    Also refer following link;
    [SAP Data Archiving Tutorial|http://www.thespot4sap.com/articles/SAP_Data_Archiving_Overview.asp]

  • Small data mart tools question

    Sorry, I'm rather new to OLAP!
    We have a small operational system that creates approximately 120K records a year in a single table, with a couple of two-level lookup tables. The time component is stored with the measure, which is already aggregated to the desired "day" granularity.
    CategoryLevel1 --> CategoryLevel2 --> Measure with date <-- LocationLevel1 <-- LocationLevel2
    We want to target a "lightweight" BI design using Pentaho, Mondrian, and Saiku against an Oracle database. If we need another schema, then it's OK to have that in the same database.
    We are considering simply using materialized views as fact and dimension tables for ETL, as described here:
    http://ww1.ucmss.com/books/LFS/CSREA2006/IKE4645.pdf
    Is this a common approach? Are there any drawbacks that are of significance for our effort?
    Appreciate any insight you can provide.

    I am not sure if this will help you, but there is a nice white paper on how the Oracle database OLAP option can be used at http://www.oracle.com/technetwork/database/options/olap/oracle-olap-11gr2-twp-132055.pdf.
    Other OLAP collateral can be found at Oracle OLAP.
    --Ken Chin

  • Need help to implement a small data warehouse or datamart

    Hi all,
    We want to improve our reporting activities. We have 3 production relational Oracle databases, and we want to build one reporting database with historized and aggregated data that meets our reporting needs.
    The databases we are using are Oracle Database 10g.
    Currently we still run queries against the source databases for reporting purposes, but from internet research I know that we can implement a data mart or data warehouse to group all aggregated information for reporting.
    The information I need: is there a tool in Oracle for data warehousing? Is Oracle Warehouse Builder the right tool, given that our data sources are all Oracle databases plus some flat files?
    Could you advise what I should use for that kind of reporting need? Can I use Oracle Warehouse Builder to develop the ETL?
    Do I need a license to use Oracle Warehouse Builder?
    Thanks,

    As a simple answer to all your questions: YES.
    Yes, Oracle Warehouse Builder could be a tool to use.
    Yes, Oracle Warehouse Builder needs a license.
    Besides that, you also need a license for that extra database.
    If you already have that, and you have the queries with which you now retrieve data, you can always choose the cheap way and build materialized views with those queries.
    Keep in mind, however, that a materialized view (or snapshot) does not support inline selects.
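    As an illustration of the cheap way, here is a minimal sketch of a materialized view built over a typical reporting query; the table and column names are invented for the example:
    create materialized view sales_daily_mv
      build immediate              -- populate at creation time
      refresh complete on demand   -- rebuild when you choose to refresh
    as
    select  trunc(sale_date) as sale_day,
            product_id,
            sum(amount)      as total_amount,
            count(*)         as sale_count
      from  sales
     group by trunc(sale_date), product_id;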
    HTH,
    FJFranken
    My Blog: http://managingoracle.blogspot.com
    P.S. If this answers your question, please set the thread to answered and award the points. It is appreciated

  • AIR for iOS Data Protection question again

    We are looking into Protecting Data Using On-Disk Encryption for our AIR for iOS iPad apps. An article on the Adobe site (Protecting content on an iOS device with DPS | Adobe Developer Connection) mentioned this can be achieved by generating a Data Protection enabled App ID/provisioning profile to package in the app.
    After we packaged and published the app using the appropriately configured provisioning profile (Complete protection), we ran an analysis on the iPad files. It's reporting that the files are using an encryption class, but the wrong one.
    We ran into two kinds of scenarios:
    1. For an App ID with the "complete" data protection service specified, the class utilized should be NSFileProtectionComplete. Instead, the class being utilized in the files is NSFileProtectionCompleteUntilUserAuthentication.
    2. For an App ID without any data protection service selected, the files saved in the app documentDirectory are utilizing "NSFileProtectionCompleteUntilUserAuthentication".
    We cannot figure out why it is using the wrong class when another class is specified, and why other apps are utilizing the class when they weren't assigned any data protection. Could something in Adobe AIR be overriding it or setting a default of "NSFileProtectionCompleteUntilUserAuthentication"?
    Any feedback is greatly appreciated. We cannot find much information on this issue but data encryption has become more and more critical now. Thank you very much.

    This is the Power View forum.
    Try asking here: 
    http://answers.microsoft.com/en-us/office/forum/office_mobile-excel-os_device_ipad?sort=lastreplydate&dir=desc&tab=Threads&status=&mod=&modAge=&advFil=&postedAfter=&postedBefore=&threadType=All&tm=1406945625798
    Thanks!
    Ed Price, Azure & Power BI Customer Program Manager (Blog,
    Small Basic,
    Wiki Ninjas,
    Wiki)
    Answer an interesting question?
    Create a wiki article about it!

  • Aggregation Questions

    Hi there
    Got a couple of questions about OWB 10g R2 aggregations.
    #1 When I create a cube with aggregations, I cannot for the life of me determine how the aggregations are actually implemented.
    Are they implemented by separate tables? materialised views?
    So far, when I browse the schema, I can't see any extra database objects created for the purpose of providing aggregates.
    #2 I have seen this problem posted by a number of people, but have not yet seen any answer on how to overcome it.
    When I create a cube, for some measures I would like to "SUM", for others I would like to "AVERAGE" and for columns such as degenerate dimensions (i.e. Transaction_ID) I would like to have no aggregation at all.
    Can anyone tell me how to achieve this using the OWB Cube object???

    Hi
    Answer to the second question:
    In Design Center you have to double-click the cube you want to examine in the Project Explorer. Then the Data Object Editor is launched. To change the aggregation function of certain measures, select the Aggregation tab in the lower right corner. Then, in the Measures panel, select the measure you want to change the aggregation function for. You can now change the aggregation for that measure in the panel: Aggregation for measure xxx.
    Regards
    Peter

  • Excel for iOS data masking question..

    Hey guys, the contracts I use for my biz require me to have fields masked (when typing in a customer's credit card info) - I had special Excel agreements built and had to give up my iPad and go out and buy a ThinkPad tablet.
    I'm really curious, though, and wondering if someone can tell me whether data masking is supported yet within Excel for iOS. I realize macros are not... Any help would be much appreciated. Thanks

    This is the Power View forum.
    Try asking here: 
    http://answers.microsoft.com/en-us/office/forum/office_mobile-excel-os_device_ipad?sort=lastreplydate&dir=desc&tab=Threads&status=&mod=&modAge=&advFil=&postedAfter=&postedBefore=&threadType=All&tm=1406945625798
    Thanks!
    Ed Price, Azure & Power BI Customer Program Manager (Blog,
    Small Basic,
    Wiki Ninjas,
    Wiki)
    Answer an interesting question?
    Create a wiki article about it!

  • Master Data Extraction-Questions

    Hi guys,
    I was referring SAP materials on Master Data Extraction...then I read something like...
    "In Master Data Datasources,some support delta and some donot.Out of those that support delta mechanism,some use DELTA QUEUE.Some donot use DELTA QUEUE functionality and it is generally incase of small volumes of Data.Then there are some other datasources which uses ALE CHANGE POINTERS for delta mechanism."
    1.Can anyone explain how can one do delta in case of small volumes of data without using DELTA QUEUE functionality?Whats the need to go for it when we have DELTA FUNCTIONALITY?
    2.How to do delta using ALE Change Pointers?Whats the need to gofor this when we have DELTA QUEUE functionality?
    Thanks in advance.
    Regards
    Schand

    Hi Des,
    I think you are explaining the difference between "Delta Update" and "Delta Queue". I am well aware of these two things.
    DELTA QUEUE - temporary storage for delta records in the R/3 system before they are loaded successfully into BI.
    DELTA UPDATE - the type of update of delta records from the R/3 system to the BI system.
    My QUESTION is:
    Among both master data DataSources and transaction data DataSources, some support delta and some do not. Usually delta records are stored in the DELTA QUEUE in ECC before being uploaded into BI. But some MASTER DATA DATASOURCES do not use the DELTA QUEUE to store delta records in ECC before uploading them into BI, and they do this in the case of small volumes of data. Does any of you know how they do it if they are not using the DELTA QUEUE?
    Second, the SAP materials mentioned that some other DataSources use ALE change pointers to determine the delta. In this case also, they do not use the DELTA QUEUE to store delta records before uploading into BI. What are ALE change pointers? How do we make the settings for this?
    Hope I explained it better.
    Regards
    S

  • Small office network questions

    I have a small office with 4-5 Mac computers. I have a Mac mini set up as a file server, and I use a standard cable connection for my internet service. I use a wired router (Ethernet cables) all going to a Netgear switch.
    Just bought a Mac mini and a Drobo storage device. I have successfully set up the Drobo on the Mac mini, and I can "see" the files and read & write to the external drive. I also have a few other people in the office who will need access to the Drobo via the network, but I have a few questions there:
    1. I don't see the names of the other computers that can connect to the unit. When I get on one of the other machines and look for the Drobo, I can find it and edit files, but from the Mac mini side I can't see the proper name of the other computer. How do I do this?
    2. Can I limit which folders are accessible within the Drobo that is attached to the Mac mini?
    3. Am I missing anything from a safety standpoint? Can anyone come into my office and access the files that are on the Mac mini/Drobo? Worse yet, is the Mac mini vulnerable to the outside world with this setup?
    Thanks for the help, new to all this networking stuff.

    It's been a week, so I don't know if you have already worked this out, but...
    While I am not familiar with your router, when I have used Linksys before, I find it better to use static IP addresses for everything. Try setting static IP addresses and make sure everyone's mask is 255.255.255.0. Some routers also have a flag to allow computers to see each other.
    Best of luck.

  • Another Unlimited Data Upgrade Question

    (Yes, it's another one of those questions.  I'm sorry, but searching - on here, or on Google - only left me with conflicting information.)
    Here's my question:
    A friend of mine is interested in selling me his Galaxy Nexus phone.  Would I be able to keep my single-line, grandfathered, unlimited data plan if I buy his phone and switch to it, or is that only possible if I were to buy the phone new, directly from Verizon?
    Here's my story:
    I have an HTC Thunderbolt that I purchased when it launched.  A few months after buying it, it turned into the HTC Bad Dream, and now it's the HTC Nightmare.  Random heat issues, battery draining, random reboots... It's the same song and dance we've heard before.  These are design issues with the phone; not something a replacement or a repair would fix.
    The real annoyance, though, is the mobile hotspot.  I pay $30/month to use the hotspot legitimately, unlike the users who root their phones to use it without paying for the option, but ever since Verizon started blocking the third-party wireless tethering apps, I've had to choose between using the stock mobile hotspot app and unleashing the above-mentioned Phonemageddon, or simply going without.  I could drop the option from my plan, but seeing as the unlimited data plan no longer exists, I wouldn't be able to get it back without switching to a tiered or shared plan.
    Switching to a more stable phone would alleviate my issues, hopefully, but the difference between Verizon's retail prices and other retailers is hundreds of dollars.  Giving up my unlimited plan, however, wouldn't make the venture worth it.  A definite answer to the question of buying used - either from a Verizon representative, or a customer who's been in the same scenario before - would be very helpful.
    Thank you very much.

    Hi,
    If you provide your own equipment (your friend's Nexus, or one from eBay, etc.) then YES, you can keep unlimited data. Another way to keep unlimited would be to pay full retail for a new phone. But I'm sure your friend will give you a better deal than that.
    Hope that helps!
