Finding Low-Cost Places to Live in the U.S.

I was interested in learning which places in the United States might offer the lowest cost of living. Although people would vary in their freedom and willingness to move anywhere, it seemed that the sources of data I found in this exploration could be useful for a variety of purposes. Needless to say, the focus on low-cost living was not intended for wealthy people, who could have very different concerns and might be choosing from a very different set of options.

Summary: The Low-Cost Index (LCI)

This post describes the process by which I arrived at the conclusion that the MIT Living Wage Calculator (LWC) provides the best estimates of low-end cost of living (CL) in various American cities. To get to that point, I compared the MIT LWC against other cost-of-living indexes (CLIs), and I also considered the amounts that the MIT LWC calculated for various cities. These steps resulted in the list of cities provided below.

Contents

Cost-of-Living (CL) Calculators
Sources of U.S. CL Data
Technical Notes
Comparing Housing Expense Data Sources
High- vs. Low-Variability CL Data Sources
Dollar-Based Sources
Narrowing the Scope of an Accurate Low CLI
Developing the Low-Cost Index (LCI)
The LCI: 227 Low-Cost Cities


Cost-of-Living (CL) Calculators

My exploration suggested that some data sources provided CL information on thousands of cities and/or counties across the U.S., while others covered, at most, only a few hundred, or even a few dozen. Plainly, a source covering only states (e.g., New York) or metropolitan statistical areas (MSAs) (e.g., “New York-Newark-Jersey City, NY-NJ-PA”) could grossly overstate or understate the cost of living within specific neighborhoods in that large area. For instance, a ghetto might account for 10% of the households within a city, but only 3% of the households across a large MSA. In that example, it would be more accurate to work with a data source focused on individual cities rather than MSAs.

So I started with a particular interest in the more comprehensive CL data sources that would cover expenses at the city level. (Note that, in some calculations, I excluded New York, in particular, for this very reason — because some sources did not clarify whether they were talking about the entire MSA or the very differently priced New York City, not to mention the vastly different borough of Manhattan.)

I found that many sites offered CL calculators. Some such calculators required a salary input, reducing their appeal for people who were switching careers, retired, full-time students, or otherwise not willing or able to talk in terms of salaries. Most such calculators presented their results in terms of index values, where the base level would typically be set at 100 and the comparison city would be expressed as a percentage of that base value. For example, a calculator might use the average of all American cities as base = 100, and might then say that the Boston CL = 150, meaning that it costs 50% more to live in Boston than in the average U.S. location.

CL calculators drew upon one or more data sources. Some relied solely upon data they collected; some used data produced by governmental agencies or other sources, perhaps with some of their own data mixed in. When multiple sources were used, they could be combined in different ways. For instance, someone might feel that sometimes Sources A and B provided irregular data, but that the average of the two would help to smooth out those irregularities. Data sources could also be weighted differently. For example, the people producing one calculator might think that housing costs (drawn from a governmental database) should count for 30% of total monthly expenditures, while the people producing another calculator might think housing was more like 35%.

Some CL calculators stated where they got their data; others didn’t. Even those that did specify data sources did not generally provide details on such questions as when they had last updated their data or how they combined multiple sources. Most of these calculators were black boxes: you provide input, you get output, and you really have no idea how they calculated that, or how accurate it might be.

To test this, I used various calculators (entering a salary of $25,000 if necessary) to compare CL for Silver City NM against Kendallville IN. Most (e.g., Salary.com, CityRating, WolframAlpha, MoneyGeek, CNN-Money, Bankrate, NerdWallet, PayScale, EPI) could not calculate this, because these two cities were not in MSAs or were not in the calculator’s database. Others did produce a result, but disagreed: SmartAsset said that Kendallville was 2% cheaper than Silver City; Sperling said it was 7% cheaper; AreaVibes said 12% cheaper; RelocationEssentials said 6% more expensive.

I tried again, comparing several of the MSA-oriented sources on the difference between New York and Reno NV, both of which were MSAs. Bankrate asked where in New York, and I specified Brooklyn; the others didn’t. Salary.com said Reno was 44% cheaper; PayScale said it was 55% cheaper; Bankrate said it was 39% cheaper. Generally, it did not seem that I was getting consistent answers to my questions from these calculators.

Sources of U.S. CL Data

Rather than rely upon simplistic calculators of uncertain accuracy, it seemed I should examine the databases on which such calculators (and other sources of CL data) were based. The goal would be to perform simple statistical analyses across dozens if not hundreds of cities. With that quantity of data, it would be easier to notice extreme cases, unrealistic results, and general differences. I might see, for instance, that one source was pretty consistently 5% higher than another. That wouldn’t necessarily tell me which one was more accurate; but if I then added or subtracted 5% to set them approximately equal, it would be easier to see where they really disagreed, and to try to figure out why.

Ideally, to perform those analyses, I would not sit there all day doing manual, one-at-a-time lookups of single cities, like those mentioned in the previous section. For this purpose, I wanted to be able to download numbers for many places at once. That was problematic because, as noted above, these numbers were often proprietary. As it turned out, there seemed to be only a few places where I could obtain CL data for relatively large numbers of cities.

C2ER

Among the CL calculators (above) and other sources of CL information, I saw many references to the Council for Community and Economic Research (C2ER, formerly ACCRA). C2ER explained that its cost of living index (CLI), published continuously since 1968, included prices on “over 60 goods and services collected at the local level from over 300 independent researchers” and was “routinely cited by the Wall Street Journal, Forbes, Kiplingers, CNNMoney . . . and many other national media outlets.” I saw that numerous academic sources also cited C2ER. Generally, C2ER appeared to be a reputable source.

Unfortunately, access to C2ER data was limited. C2ER did offer a tool permitting CL comparisons between up to five sets of cities for $5-8 per comparison. There were some free alternative ways to access C2ER data. For example, LocalDatabase offered a version permitting free C2ER comparisons for just one set of cities; InfoPlease offered an old (2010) list of C2ER annual average CLI values for 325 U.S. cities; and the Missouri Economic Research and Information Center (MERIC, 2016) offered a more recent annual C2ER list for the 50 states. Access to the full C2ER database evidently required an organizational membership costing $650 per year. It appeared that C2ER also produced a quarterly report on current prices, for a price of $95 per issue or $175 annually. ACCE seemed to say that at least some individuals or organizations (i.e., local chambers of commerce (CoCs), according to Rex, 2014, p. 2) could qualify for a free subscription to that quarterly report, apparently in exchange for becoming one of C2ER’s “independent researchers” — in other words, by contributing data on prices in their cities.

It seemed, then, that C2ER data came from prices recently supplied by CoC employees in various cities. For example, the C2ER quarterly report published in October 2016 (referred to here as C2ER Oct2016) provided data on 261 cities, a set that did not entirely overlap with the cities covered in other C2ER quarterly reports. C2ER Oct2016 (pp. i-ii) seemed to say that these CoC employees submitted prices for 57 goods and services believed to be representative of consumer spending, particularly among upper-income (i.e., top 20%) households. That income bracket would consist, according to the U.S. Census Bureau (Proctor et al., 2016, p. 8), of households with incomes of at least $117,003 per year. Wikipedia indicated that persons in that top income bracket included doctors, lawyers, and people in certain other professions and academia (e.g., physicists, mathematicians, nuclear engineers). The most successful business owners would also populate that stratum.

In the case of housing, the skew toward upper incomes in C2ER’s calculations meant that 71% of average monthly expenditure was assumed to go toward mortgage payments and only 29% toward apartment rental, while an index more oriented toward lower-income people (e.g., the Living Wage Calculator, below) might be based mostly or entirely on rental costs. In addition, C2ER Oct2016 cautioned that CLI data for a given city could vary according to whether those data reflected suburban or central city prices. Of course, there could also be considerable variance according to the particular stores or other sources that the data contributor visited. Easton (2015, p. 10) found that C2ER provided mediocre to poor guidance for housing prices.

Federal Government Sources

The U.S. Bureau of Labor Statistics (BLS) had most recently (as of April 2017) produced a report on consumer expenditures in 2015. That report offered detailed consumer spending breakdowns by a variety of criteria (e.g., education, household income). Unfortunately, the criteria did not include city of residence. BLS (like others) offered a CPI inflation calculator that was based on BLS CPI data (see FAQs), but BLS indicated that its available CPI data included city-specific values for only 26 U.S. cities. BLS did not provide links from that page to the detailed reports on those cities, but eventually I found a page through which it appeared possible to look up the available BLS CPI data on specific locations. In any case, however, BLS said that its CPIs for specific areas were intended only to show how prices change over time in the same area, and were not suitable for comparison against other areas.

As another possible source of U.S. city CL data, according to Investopedia, the Personal Consumption Expenditures (PCE) index prepared by the U.S. Bureau of Economic Analysis (BEA) was similar to the BLS CPI. BEA provided a more elaborate explanation of the differences. It appeared that PCE results could be obtained on a per capita basis, but were reported only at national and state (not city) levels.

Alternately, BEA calculated Regional Price Parities (RPPs). Business Insider explained that BEA’s RPP was “an index that sets the national average cost of goods and services at 100.” Rex (2014, pp. 2-3) described BEA’s RPPs as being based on, and an alternative to, the BLS CPI. Rex said that C2ER’s CLI fluctuated more than RPPs, from city to city, because C2ER included house prices, and those varied considerably from one region to another. Nonetheless, Rex found a high degree of correlation between C2ER’s CLI and RPPs. BEA’s RPPs used “about 1 million [price observations] per year” (Aten et al., 2012, pp. 232-233); they were used to estimate CL in, for instance, the Indeed.com report on high-paying tech jobs (2016). I was able to download RPPs for 383 locations. So this was a potentially useful CLI data source. It did have the drawback of providing data only for MSAs; also, like other sources considered so far, it was an index; its conclusions were not stated in dollar amounts.

As a policy comment, at a time when the federal government was cutting back on data collection and analysis, it seemed inappropriate that such an intensely mobile nation would not be collecting basic, reasonably reliable information on the costs of living in its various cities.

Comprehensive Lookup Sources

In addition to the foregoing sources from which it was possible to obtain CLI values for dozens if not hundreds of U.S. cities, there was the option, noted above, of doing city-by-city lookups, one city at a time.

One such lookup source came from the Economic Research Institute (a/k/a ERI or SalaryExpert). ERI provided a CL calculator permitting comparisons between two cities, with a focus on equalizing salaries. That calculator, and ERI’s CL lookup tool, appeared to draw upon ERI’s database of more than 10,000 cities worldwide, although spot checks revealed that data were not present for some locations. It appeared that ERI’s calculations were intended to estimate the CL of middle- and upper-middle-income workers whose relocations would qualify for CL adjustments in salary. ERI claimed that it sold access to its database, aimed at human resources professionals, to “most of the Fortune 500 and thousands of small and medium-sized companies,” although its list of customer testimonials was quite modest. I saw that ERI was cited with some frequency in the popular press, though rarely by academic sources. ERI said that its CL data were “derived from ERI’s cost of living surveys and web digitization of public domain records.” I did not see an explanation of the surveys, nor clarification of what “digitization of public domain records” meant.

In the continuing quest for a data source that would tell me the costs of living in different cities, I spent the better part of a day attempting to develop suitable weightings for CareerTrends data on average monthly expenses for various family sizes and configurations. Eventually I found that the CareerTrends data came from the Economic Policy Institute (EPI). EPI did not appear to offer data for wholesale download, but did provide a family budget calculator (FBC) permitting CL comparisons — in actual dollar rather than index terms — for households consisting of one or two adults and zero to four children (each assumed to differ by four years in age), in up to three of 618 geographical areas. Gould et al. (2015) explained that those geographical areas included 85 “family budget areas,” 485 single-state MSAs, and rural areas in 48 states. Gould, the economist behind EPI, said the primary components of the budget were rent, food, transportation, child care, healthcare, taxes, and other necessities. Data for these components came largely from other national data sources (e.g., U.S. Department of Housing and Urban Development, for rent; U.S. Department of Agriculture, for food costs; non-governmental organizations, for costs of child care and taxes). The general concept of the EPI FBC was to measure “the income a family needs in order to attain a modest yet adequate standard of living.”

That wording sounded somewhat like the idea of the “living wage,” defined by Wikipedia as “the minimum income necessary for a worker to meet their basic needs.” The Living Wage Calculator created by an MIT professor (MIT LWC) offered monetary (i.e., dollar, not index) values stating the amounts needed for a living wage, apparently covering thousands of U.S. cities and counties. The target, here, was lower than that of the EPI FBC (above): the more detailed explanation indicated that the living wage was construed as “the minimum income standard that, if met, draws a very fine line between the financial independence of the working poor and the need to seek out public assistance or suffer consistent and severe housing and food insecurity.” Like EPI FBC, MIT LWC offered options for different household compositions. Nadeau (2017) described MIT LWC’s sources; they were similar and in some cases identical to those used by EPI FBC. Nadeau also said (p. 2) that the adults in the various household composition alternatives covered by MIT LWC were assumed to be employed full-time unless otherwise stated (as in, specifically, the “2 Adults 1 Working” scenarios). As such, the LWC was not designed for adults who were fully retired or working only part-time, and who thus would not have work-related expenses (e.g., clothing, meals, transportation). My search did not lead immediately to estimates of the CL increases (e.g., for healthcare) or decreases (e.g., not commuting, not buying work clothes) experienced by the average full-time retiree. Unlike ERI and other black-box solutions, EPI FBC and MIT LWC seemed to be fairly open about their sources and uses of data.

Note that both EPI FBC and MIT LWC included an income tax component in their expense estimates. People with little or no income (due to, e.g., retirement) would ordinarily want to remove the tax component from the dollar amounts quoted by these sources, so as to arrive at their actual CL. In this calculation, people with little income might nonetheless face a higher expense level if, for lack of earned income, they no longer qualified for the Earned Income Credit or other tax benefits.

Limited U.S. CL Data Sources

The foregoing sources offered CL index or dollar values for hundreds if not thousands of U.S. cities and/or counties. I did not find other sources providing more or better information for the U.S. as a whole.

There was a different kind of data source, however, that seemed to require some attention in this post, namely, sources providing CL information on cities worldwide. As described in a separate post, there were many of those international data sources, but only three of them — Numbeo, The Economist Intelligence Unit (EIU), and Expatistan — also had entries for a fair number of American cities. These international sources seemed worth covering in this post for two reasons. First, they gathered their data in their own ways, and might thus provide some perspective on the conclusions presented by the foregoing domestic sources. Second, for purposes of that other post, I was interested in learning how well these international sources corresponded with American sources. If Numbeo, for instance, seemed to provide inaccurate information about CL in U.S. cities, there would be a question of whether Numbeo’s coverage of CL abroad was accurate.

Technical Notes

To compare multiple sources of data on CL in various U.S. locations, it was necessary to make various calculations and adjustments. I did not plan to do anything too complicated in this regard. Readers interested in understanding and perhaps refining my calculations may appreciate an explanation of concepts related to the steps I took.

Ranking vs. Indexing

A ranking is a list, arranged in declining order. For example, in the list of countries in the world with the largest population, India is expected to climb the ranking from position no. 2 to position no. 1. Rank positions do not necessarily say anything about amounts. For example, China presently ranks no. 1 in population, Japan ranks no. 10, and Togo ranks no. 100, but that does not imply that China has ten times as many people as Japan, or 100 times as many as Togo. It just means there are nine countries with more people than Japan, and 99 with more people than Togo.

An index does not depend on any arrangement, though it is common to sort items by their index value. An index value indicates how one item compares to a “base” or “reference” item. The base value is typically set to 100. Index values tend to be easy to talk about and compare, without knowing the underlying details. Suppose, for example, that we created an index whose base value was the population of New York City in each year. If the index value for another city was 108 in 2004 but grew to 116 in 2014, we would know that the other city was 8% larger than NYC in 2004, that it was 16% larger than NYC in 2014, and that it was growing faster than NYC between 2004 and 2014. We would know these things without having any idea how large NYC or the other city actually is. This would be true regardless of ranking: there might be no cities whose size was between that of NYC and this other city in 2004 (i.e., with index values of 100 to 108 that year), or there might be a dozen such cities.
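The arithmetic behind such index values is simple division against the base. As a minimal sketch, with made-up populations chosen to reproduce the 108 and 116 figures above:

```python
# Hypothetical populations (illustrative numbers only, not real census data).
base_2004, base_2014 = 8_000_000, 8_400_000   # reference city (base = 100)
city_2004, city_2014 = 8_640_000, 9_744_000   # the other city

index_2004 = 100 * city_2004 / base_2004   # 108.0: 8% larger than the base in 2004
index_2014 = 100 * city_2014 / base_2014   # 116.0: 16% larger than the base in 2014
```

Note that the index says nothing about how many cities fall between the two, nor about the absolute sizes involved; it expresses only the ratio to the base.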

Normalizing

For purposes of this post, normalizing means converting numbers to index values — and making sure, moreover, that the lowest index value is zero and the highest is 100. Conversion of temperature, from Fahrenheit to Celsius, is an example: you start with Fahrenheit’s weird scale, where water freezes at 32 and boils at 212, and you end with the nice, orderly Celsius scale, where water freezes at zero and boils at 100.

Normalizing is useful to make different scales comparable. So if one data source tells us that its lowest-ranked city has a CLI = 18.2 and its highest-ranked city’s CLI is 414, while another data source says the CLIs for those two cities range from 3 to 24, we can use basic arithmetic to put them both onto a single scale, from zero to 100, and compare their values straight across. In that example, 18.2 on one scale equals 3 on the other, and likewise 414 on one scale equals 24 on the other.
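As a sketch, min-max normalization of that example might look like this (the 18.2–414 and 3–24 endpoints come from the text; the middle values are made up for illustration):

```python
def normalize(values):
    """Rescale a list of numbers so the minimum maps to 0 and the maximum to 100."""
    lo, hi = min(values), max(values)
    return [100 * (v - lo) / (hi - lo) for v in values]

source_a = [18.2, 120.0, 414.0]   # CLI scale used by the first source
source_b = [3.0, 8.0, 24.0]       # CLI scale used by the second source

# After normalizing, both scales run from 0 to 100 and can be compared directly.
norm_a = normalize(source_a)
norm_b = normalize(source_b)
```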

Equalizing

For purposes of this post, equalizing means making two data sources similar in some way. The primary example would be adding a constant value to equalize the means of two CLIs. So, for example, if the average value provided by Index A was 50, and the average value provided by Index B was 60, those two sources could be graphed, in a way that would highlight their similarities and differences, by simply adding 10 to each data point in Index A. Note that, while this sort of treatment could be informative, it could also distort the realities for some purposes.
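A minimal sketch of mean-equalizing two indices, using the averages of 50 and 60 from the example above (the individual data points are made up):

```python
index_a = [40, 50, 60]   # mean = 50
index_b = [55, 60, 65]   # mean = 60

# Shift Index A by a constant so its mean matches Index B's mean.
offset = sum(index_b) / len(index_b) - sum(index_a) / len(index_a)
index_a_equalized = [v + offset for v in index_a]
```

After this shift, the two series can be graphed together, and any remaining differences between them reflect genuine disagreement rather than a difference in overall level.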

Method of Comparison

The general mission, here, was to compare normalized data sources and see whether any of them behaved oddly. An example of odd behavior would be to characterize an expensive city as inexpensive. The purpose of such comparisons would be to see whether one data source was more believable than another. This was necessary because we did not have a God’s-eye perspective going into this: we didn’t know absolute truth, as to which source of information was most reliable. All we had was the ability to compare these sources against each other and see whether any of them seemed particularly believable or unbelievable.

Selection of Data

Data sources did not necessarily cover exactly the same cities. To compare apples to apples, we would want the most recent available data for the maximum possible number of points in common. So there was a question of which cities I would compare, from the total of eight sources that I had decided were worthwhile.

I started with the lists of cities in the U.S. covered by the three international sources just mentioned. Numbeo offered CLI values for 112 U.S. cities; Expatistan offered CLI values for 50 U.S. cities; and EIU offered CLI values for 16 U.S. cities. Those three lists overlapped; altogether, they formed a list of 116 cities for which at least one of those three sources provided a value.

Next, the domestic sources. C2ER Oct2016 provided CLI values for 261 cities, and BEA’s RPPs covered 383 locations. After discounting overlap, those two named a total of 457 places. And then there were the three comprehensive lookup sources: ERI, EPI FBC, and MIT LWC. I assumed one or more of those would have entries for most if not all of those 457 places. I did not assume, however, that I was willing to look up hundreds of entries, for each of three different sources.

When deciding how many cities to look up in those three lookup sources, I decided to make sure that my comparison included all of the cities for which Expatistan and EIU (i.e., the smallest sets) provided values. In addition, I had already looked up values for a number of cities in ERI and MIT LWC, so I included those cities. Further, I made sure the selection included the three highest- and lowest-scored cities evaluated by each of the non-lookup sources. I had no way of knowing which were the most- or least-expensive cities in the lookup databases, since their full listings were not accessible, but I did make sure the selection included each of Investopedia’s ten most- and CBS MoneyWatch’s ten least-expensive cities for 2017.

That gave me a total of 82 cities to compare. Each of these cities was evaluated by at least three sources. Sources offered various values. I used those values as follows:

  • Numbeo offered a CLI (excluding rent), a rent index (RI), and other indices. For reasons discussed in the other post, I did not use indices other than the CLI and RI.
  • EPI FBC and MIT LWC offered a variety of household configurations. From each of those two sources, I chose the single adult, no children configuration. In previous comparisons of MIT LWC no-child estimates for a number of cities, I had formed the impression that, compared to the “1 Adult” configuration, the “2 Adults” configuration averaged 56% more expensive, and the “2 Adults, One Working” configuration averaged 60% more expensive.
  • EPI FBC stated dollar estimates for each component of household expense. I kept the EPI FBC total, but also combined those components into two subtotals. The CLI subtotal excluded rent; the Rent Index (RI) subtotal consisted of the stated Housing amount. EPI FBC said that the Housing component included rent and traditional utilities but not phone, cable, satellite, or Internet service.
  • Housing was included in the CLIs for MIT LWC, Expatistan, and ERI. C2ER offered a composite CLI as well as a separate breakdown of its components, including housing; EIU said that its CLI did not include housing. Where housing was excluded, it was not clear whether any utilities were also excluded.
  • BEA’s RPP documentation seemed to present a complex weighting scheme for rents and other expenses. It did not appear that I could disentangle it, so as to present a non-rent CLI. Instead, I used the Composite value, which included rent expense, and also used the Rent value.
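
The household-configuration scaling mentioned above can be sketched as follows. Only the 56% and 60% ratios come from my comparisons; the $20,000 base figure is hypothetical:

```python
one_adult = 20_000                          # hypothetical "1 Adult" annual CL estimate
two_adults = one_adult * 1.56               # "2 Adults" averaged ~56% more expensive
two_adults_one_working = one_adult * 1.60   # "2 Adults, One Working" averaged ~60% more
```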

In short, I was about to compare CL in 82 cities, using eight different sources providing a total of 14 different measures, some of which had a rent or housing component. To make these measures comparable, I normalized them. It made sense to set the bottom ends of each normalized measure equal to zero, with one exception: EIU, alone among these eight, did not appear to capture a spectrum of cities, from rich to poor. Its selection was limited to larger and relatively more expensive places. It would have been incorrect to characterize EIU as claiming that Atlanta, its least expensive city, was one of the cheapest cities in the U.S. Therefore, I set the bottom end of EIU’s normalized scale to 36, which was the average of the other normalized non-housing CLIs for Atlanta.
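My treatment of EIU can be sketched as a normalization whose floor is 36 rather than zero. The raw EIU values below are hypothetical; only the floor of 36, derived from the Atlanta calculation described above, reflects what I actually did:

```python
def normalize_with_floor(values, floor=0.0):
    """Min-max normalize onto the range [floor, 100] instead of [0, 100]."""
    lo, hi = min(values), max(values)
    return [floor + (100 - floor) * (v - lo) / (hi - lo) for v in values]

eiu_raw = [74, 80, 95, 120]                    # hypothetical raw EIU CLI values
eiu_norm = normalize_with_floor(eiu_raw, 36)   # cheapest EIU city pinned at 36, not 0
```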

Comparing Housing Expense Data Sources

As just indicated, I had found various sources of U.S. CL data, and was now about to start applying those sources to a set of 82 cities. The purpose, at this point, was to determine whether some sources were more credible than others.

I began by looking at those sources’ housing- or rent-related indices (RIs). RIs were provided by Numbeo, C2ER, BEA’s RPPs, and EPI FBC. My question was whether any of those four RIs, normalized, was responsible for an unusual number of high or low values. It developed that BEA’s RPPs tended to be high, possibly because of their orientation toward MSAs, as discussed above. In Mobile AL, for instance, the normalized values for the three other RIs were all in the range of 5 to 11 (maximum possible value = 100), but the normalized value from BEA’s RPPs was over 25.
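One simple way to look for such outliers is to compare each source's normalized RI for a city against the median of the other sources' values for that same city. A sketch, using made-up numbers that mimic the Mobile AL pattern described above:

```python
from statistics import median

# Hypothetical normalized RI values (0-100 scale), keyed by city and source.
city_ris = {
    "Mobile AL": {"Numbeo": 5, "C2ER": 8, "EPI FBC": 11, "BEA RPP": 26},
}

def flag_outliers(ris, tolerance=10):
    """Flag any (city, source, value) whose value is far from its peers' median."""
    flags = []
    for city, by_source in ris.items():
        for source, value in by_source.items():
            others = [v for s, v in by_source.items() if s != source]
            if abs(value - median(others)) > tolerance:
                flags.append((city, source, value))
    return flags

flags = flag_outliers(city_ris)   # flags the BEA RPP value for Mobile AL
```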

It seemed that problem might be mitigated — that the four RIs might be presented as being in closer sync with one another — by equalizing them. Doing so yielded a graph demonstrating some agreement among them:

The graph seems to indicate that the four RIs, normalized and adjusted, displayed rough agreement on the rank and magnitude of housing price differences among most cities — until we reached the more expensive cities, mostly on the East and West coasts. At that point, as indicated by the bend in the black trend line, almost every city along the line was considered, by at least some sources, to be noticeably more expensive than the previous one. Of course, any graph could be made to display dramatic changes, simply by removing data points between its start and its end. The right side of this graph could be made less dramatic by adding more expensive cities. But if every city in the U.S. were added, the right side would still have a relatively sharp upward bend because, relatively speaking, there just weren’t that many expensive cities in the U.S.

Also, while the disagreements between the four RIs would predictably become more pronounced when dealing with larger numbers, the change in variability at this point was sudden and dramatic. Honolulu illustrated that. At one extreme, represented by the yellow peak at the top right corner of the graph, Honolulu’s rents were the most expensive of any city I examined in the EPI FBC. At the other extreme, displayed in the blue line dropping down in the opposite direction from that yellow peak, Numbeo’s raw RI value for Honolulu was 67.28, meaning that Numbeo considered rents in Honolulu only about two-thirds as expensive as in NYC.

There seemed to be several possible explanations for the relatively chaotic situation among RIs in the most expensive cities. For one thing, BEA’s RPPs and EPI FBC were responsible for almost all of the highest normalized RI values for those cities. As mentioned earlier, the RPPs were based on MSAs, which could dilute the effect of the poor parts of town, spreading out the lower costs of those areas across a large and relatively wealthier set of cities. Moreover, the people behind EPI FBC, seeking to fulfill their mission of pricing “a modest yet adequate standard of living,” may have found that it is expensive to achieve an “adequate” standard of living, comparable to what one might readily find elsewhere, in a city where you could spend a million for an apartment the size of a breadbox. (Several of these factors would be at play in the case of NYC, which proved problematic throughout my research, insofar as not all sources would make clear whether they were talking about Manhattan as distinct from the five boroughs, the immediately surrounding area including parts of NJ and CT, or perhaps the entire MSA.)

There could also be intended or unintended distortions in the input data. For instance, the ordinary people volunteering data for Numbeo, and the CoC employees volunteering data for C2ER, may have been inconsistent as to whether the data they provided came from more- or less-expensive parts of town; they may have tended to be more or less careful with their money as a class, and their purchases may reflect that; they may have been biased in favor of understating the costs of their present locations (e.g., hoping to make their cities seem more appealing) or overstating them (e.g., to make their cities appear wealthier). The relatively random users of Numbeo and Expatistan probably included a greater proportion of ignorant, inept, indifferent, and malicious (e.g., competing; vandalistic) individuals, compared to C2ER, whose CoC employees would typically have collected these data on previous occasions, sometimes going back years; who would have at least the possibility of guidance from others similarly situated; who might thus have relatively informed and consistent ways of doing it; and who would presumably have some commitment to the quality of the database. Granted, the CL data for a specific city would evidently depend upon the information provided by just one CoC — often, no doubt, just one CoC employee — and yet, for that very reason, there could be a sense of visibility and responsibility not possible in the concealed Numbeo and Expatistan databases.

I had originally considered combining the RI and non-RI components of some of my data sources, to explore and develop various ways of calculating expense totals. As I became increasingly attuned to the complexities and uncertainties of these various data sources, however, I decided to narrow my focus by turning my attention instead to the sources providing estimates of total CL, with or without rent components.

High- vs. Low-Variability CL Data Sources

I had started with eight different data sources, providing a total of 14 indices or other measures. I had looked at four of those indices in connection with rent (above). That left ten indices on the subject of general CL. This relative wealth of information seemed likely to help identify the most reliable CL data sources.

Now I proceeded to make adjustments as described in the preceding section, so as to facilitate direct comparison among these ten remaining indices. Once again, I noticed two different trend lines: a gradual one, for most cities, and a steep one, for the most expensive cities. Given this post’s focus on affordable places, I decided to drop the cities that most of my sources considered especially expensive. I identified those cities by calculating the median values given to each city by the modified CLIs I had developed, and identifying the approximate point where the direction of the trend line changed. As a point of comparison, I saw that the more expensive cities that I was eliminating from further consideration were, with one exception, the same as those for which MIT LWC estimated CL above $22,000 for one adult with no children. Common sense would also have named some of those same cities (e.g., NYC, L.A., San Francisco, Honolulu) as especially expensive.
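The median-and-cutoff step just described can be sketched in a few lines of Python. The city names, index values, and the threshold of 120 here are all hypothetical; in the actual analysis, the cutoff point was identified by inspecting where the trend line turned steep.

```python
# Sketch of the cutoff step: take the median of each city's adjusted index
# values, sort, and drop cities above the point where the trend turns steep.
# All values below are made up for illustration.
from statistics import median

# Hypothetical adjusted index values: city -> values from several CLIs
adjusted = {
    "City A": [92, 95, 90],
    "City B": [101, 98, 104],
    "City C": [150, 162, 158],   # sharply higher: on the steep trend line
}

# Median across sources gives each city a single consensus value
medians = {city: median(vals) for city, vals in adjusted.items()}

# Keep only cities at or below an illustrative threshold of 120
THRESHOLD = 120
affordable = [c for c, m in sorted(medians.items(), key=lambda kv: kv[1])
              if m <= THRESHOLD]
print(affordable)
```

Substituting the real adjusted indices and the empirically chosen threshold would reproduce the filtering step described above.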

That step left me with 64 less expensive cities. At this point, it seemed advisable to eliminate EIU from the set of ten indices, for several reasons: the EIU CLI contributed relatively few data points; many of those were inconsistent with their peers, as detailed in the other post; and many of the data points that EIU did contribute were for large, more expensive cities that were not really germane to (and had now been eliminated from) this search for affordable places.

That left me with nine indices. Among these, I counted those that were responsible for a disproportionate number of maximum or minimum values. For instance, Numbeo’s raw CLI put Indianapolis at nearly the expense level of San Diego. That seemed unlikely to me, having lived in Indy and having also spent some years in Southern California. Relying on the median of all nine indices, in their adjusted form, San Diego was one of the expensive cities that I had just eliminated. So, clearly, Numbeo’s conclusions were atypical in that case. In fact, I now saw that, among my set of 64 cities, Numbeo had evaluated 49; for 15 (31%) of those 49, Numbeo’s evaluations (as adjusted) were the highest of any of my nine indices; and for another 15 of those 49, Numbeo’s adjusted evaluations were the lowest. This was visible when I graphed the comparison: Numbeo was responsible for many peaks and valleys, repeatedly departing sharply from the tendency of most other sources.

Numbeo was not the only source that seemed to be marching to a different drummer. Thirty percent of Expatistan’s equalized evaluations were the lowest provided by any index; 25% of the evaluations provided by BEA’s RPPs were the highest, as anticipated above, but also 19% were the lowest; and for EPI FBC (without the housing component) 22% were the highest and 14% were the lowest. These contrasted with four other indices, for each of which fewer than 10% of CL evaluations were the highest or lowest provided by any index — namely, C2ER with and without housing; ERI; and EPI FBC with housing. MIT LWC was between the two groups — less likely than the first group to quote the highest or lowest CL for any city, but more likely than the second group. In other words, there appeared to be one set of indices whose values clustered around the mean, and another, more erratic set whose values departed more significantly from the mean, in both positive and negative directions.
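The extreme-value tally described in these paragraphs can be sketched as follows. The city names, source names, and index values are made up for illustration; only the counting logic reflects the procedure.

```python
# For each city, find which source gave the highest and the lowest adjusted
# value, then tally those "wins" per source. A source with many entries in
# both counters would be labeled high-variability. Data are hypothetical.
from collections import Counter

ratings = {
    "City A": {"Numbeo": 130, "C2ER": 100, "ERI": 102},
    "City B": {"Numbeo": 70,  "C2ER": 95,  "ERI": 97},
    "City C": {"Numbeo": 99,  "C2ER": 98,  "ERI": 101},
}

highest, lowest = Counter(), Counter()
for city, by_source in ratings.items():
    highest[max(by_source, key=by_source.get)] += 1
    lowest[min(by_source, key=by_source.get)] += 1

print(highest, lowest)
```

Dividing each source’s counts by the number of cities it evaluated would yield percentages like the 31% figure quoted for Numbeo above.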

Of course, those more variable indices could be pointing at a truth that the less variable indices just didn’t want to hear. The problem with that theory was that these Messiahs were pointing in different directions. In many cases, that is, when one of these four would ascribe a high value to a city, another of these four would ascribe a low value. Consider Minneapolis. EPI FBC (without housing) said it was quite affordable, ranking below Detroit, while Numbeo said it was more expensive than Silicon Valley’s San Jose. Another example: Flint MI. Expatistan rated Flint as one of the cheapest cities in the country, whereas BEA’s RPPs said it was more expensive than state capitals Springfield IL or Columbia SC. Generally, this graph illustrates the contrasts between one of the four low-variability indices and the four high-variability indices:

In this graph, the four high-variability indices generate a lot of noise. The low-variability index displayed by the thick white line (C2ER with Housing) tends not to go to the extremes visible in its four high-variability counterparts. Instead, it runs fairly close to the mean of multiple indices.

The right side of the graph shows the start of the upward sweep that was more visible in the previous graph. It seems that perhaps six to ten more cities might have been lopped off the right side, to focus just on less expensive cities. I left those cities there, as a reminder of what lies beyond. Meanwhile, the left side of the graph suggests the possibility of a continued sharper downward slope, among smaller and more remote cities and towns not evaluated by these CLIs.

To be sure, the numbers could be graphed differently. Look, for instance, at the blue line for Numbeo, in the preceding graph. It shows Numbeo jumping all over the place. That’s because the cities shown in that graph were sorted in the order suggested by the average of the low-variability indices. Numbeo looked much less volatile when the data were sorted to present Numbeo’s concept of which cities are most expensive, as in the yellow line here:

And yet, this graph also shows why Numbeo’s CLI should not be driving the discussion. Notice, for one thing, how the four low-variability indices graphed here do not fluctuate as wildly as the high-variability indices did in the previous graph. Note also how they move in a roughly similar pattern, tending to go up or down somewhat in harmony, displaying approximate agreement on the values appropriate to most cities — quite unlike the high-variability indices depicted in the previous graph. That is, if the cities graphed here were arranged according to the four low-variability sources, we would see a more gradual upward progression, from left to right, with substantial agreement among those sources.

Consider, in addition, the order in which Numbeo would have us list the cities: contending that Minneapolis is more expensive than Fort Lauderdale, for instance, and that Wichita is far more expensive than Reno. Granted, Numbeo’s CLI excluded real estate. But it was doubtful that such findings made sense even then. Certainly they were not supported by the two other non-real estate indices (i.e., C2ER and EPI FBC without Housing).

As noted above, several factors suggested a greater risk of inaccuracy in the user inputs provided to Numbeo than in those provided to C2ER. Those considerations, and the variability discussed here, may help to explain why, as pointed out earlier, numerous media outlets (e.g., CNN-Money, Bankrate) used C2ER as their CL source. While I saw that many ordinary people cited Numbeo, it did not seem to me that many substantial corporate interests relied upon it.

When you have nine indices hovering around a mean, it is not unrealistic to think that the mean generally reflects the best collected wisdom, and that the indices closest to the mean, like C2ER, are probably more accurate than those, like Numbeo, that depart frequently and dramatically from that mean. As a point of interest for later discussion, two of the five indices that I have not labeled as high-variability (i.e., EPI FBC with Housing and MIT LWC) were originally expressed in dollar terms, before I converted them into index terms for comparability. Compared to a relatively abstract index, a longstanding source stating dollars might be subject to greater pressure to conform to reality. Saying that a city has a CLI value of 35.8 is not as clear, nor as subject to users’ commonsense reactions, as saying that it costs a single adult $43,500 to live there for a year.
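For readers curious how a dollar-based source can be “converted into index terms for comparability,” here is one minimal way to do it. The choice of the mean across cities as the base value is my assumption; any consistent reference would serve, and the dollar figures are illustrative.

```python
# Convert annual dollar costs into an index where 100 = the mean city.
# Dollar figures below are placeholders, not actual source data.
costs = {"City A": 17400, "City B": 20300, "City C": 24650}

base = sum(costs.values()) / len(costs)          # mean annual cost
index = {c: round(100 * v / base, 1) for c, v in costs.items()}
print(index)
```

The reverse conversion (index back to dollars) requires remembering the base, which is one reason the text below favors sources that state dollars directly.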

The variability of CLIs may also have been influenced by the presence or absence of a housing component. I saw that two of the four high-variability indices lacked such a component, while three of the four low-variability indices included such a component. All other things being equal, there would presumably be more pressure to yield believable results when users were given total CL results, leaving no room for a hard-to-test claim that, say, Reno really is more expensive than Philadelphia, not counting rent.

Finally, it seemed that not all of the low-variability indices were entirely independent. For one thing, the foregoing graph does not display both of the C2ER indices, since they were so similar to one another in terms of method and data sources. Notice, in addition, the close correspondence between C2ER and ERI (i.e., the two greenish lines). You may recall that ERI did not care to specify where its data came from. They say great minds think alike. ERI’s people may have followed a line of reasoning not unlike the one unfolding in this post: reaching the conclusion that C2ER provided a relatively accurate available statement of CL in various cities, and tweaking it to improve it or make it more consistent, perhaps in line with prior C2ER reports or other sources. Of course, C2ER could not be ERI’s only source; ERI covered far more cities than C2ER. But given their data similarities, it did appear somewhat reasonable to characterize ERI’s CLI as an expanded, more accessible, and possibly refined portal into C2ER’s pricey data. It appeared, in other words, that C2ER was to a fair degree redundant and underdeveloped compared to ERI. Going forward, this perception (and ERI’s availability and other advantages) led me to lean more on ERI and less on C2ER.

The conclusion reached in this section was that the four high-variability CLIs tended not to provide the most stable, consistent, and convincing guidance. Moving forward, it seemed wise to concentrate upon the lower-variability indices. To eliminate redundancy and remove a bias in favor of one source, it made sense to use only C2ER with Housing, and not C2ER without Housing — and then, as just noted, to use ERI rather than C2ER for the most part. Thus, at this point the guiding indices seemed to be ERI, MIT LWC, and EPI FBC with Housing (referred to henceforth as simply EPI FBC).

Dollar-Based Sources

As noted above, two of my data sources provided dollar amounts rather than index values. MIT LWC strove to estimate, in each city, the minimum amount necessary to hit the line separating “the financial independence of the working poor” from “the need to seek out public assistance,” while EPI FBC sought “the income a family needs in order to attain a modest yet adequate standard of living.”

Those differences in orientation — “minimal” vs. “modest” lifestyles — meant a great deal. For the 77 cities for which I had usable lookup data from both sources, the mean was $30,119 for EPI FBC vs. $20,433 for MIT LWC, for a difference of $9,686. The largest amounts needed to achieve those objectives in any particular city were $46,308 for EPI FBC, in Honolulu, and $29,069 for MIT LWC, in San Francisco. The greatest difference between the two sources in any city was $18,757, in Honolulu. The smallest amounts needed were $23,838 for EPI FBC, in McAllen TX, where the difference between the two sources was also at its minimum (i.e., $5,502), and $16,935 for MIT LWC, in Danville IL.

Generally, EPI FBC’s modest lifestyle in a particular city could be anywhere from 30% (McAllen) to 68% (Honolulu) more expensive than MIT LWC’s minimal lifestyle. That percentage difference did not increase linearly with CL. Indeed, the two indices diverged markedly for some cities. It did not seem that one would get good results by assuming that the EPI FBC modest lifestyle could be calculated just by adding a certain amount or percentage to the MIT LWC minimal lifestyle (speaking, as always, of the single adult, no children household):

Did MIT LWC and EPI FBC share a common sense of how much it cost to live in a given city, at either the minimal or modest levels of expense? I lacked the time, and the familiarity with the underlying data sources, that would have been needed to reconstruct their calculations in detail and figure out where the two sources might have parted ways. My review of the methodology notes for EPI FBC and MIT LWC yielded the impression that these two sources used some of the same databases in similar ways. The MIT LWC data appeared somewhat newer, but that would not explain the sometimes considerable discrepancies in the two sources’ estimates. It did not appear that the methodology notes provided sufficient detail for a precise reconstruction.

As a general observation, in all of these cities, MIT LWC required between 59% and 77% of the amount required by EPI FBC. Narrowing it down somewhat, for more than 80% of these cities, the MIT LWC dollar value was 69% of the EPI FBC value, plus or minus 4%. In addition, eight of the 15 cities falling outside that nine-point range (i.e., 65-73% inclusive) were in the top or bottom 10% of these 77 cities, according to one or both of these sources. It would not be surprising if the formulas applicable to most cities worked somewhat differently in especially cheap or expensive places.
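The 69%-plus-or-minus-4 observation can be checked mechanically: compute MIT LWC as a fraction of EPI FBC for each city, and count the cities falling inside the band. This sketch uses made-up city figures, not the actual 77-city dataset.

```python
# Ratio-band check: what share of cities has MIT LWC at 65-73% of EPI FBC?
# (MIT LWC, EPI FBC) annual dollar pairs below are illustrative only.
cities = {
    "City A": (19000, 27500),
    "City B": (17000, 25000),
    "City C": (21000, 26500),   # ratio well above the band
}

ratios = {c: mit / epi for c, (mit, epi) in cities.items()}
in_band = [c for c, r in ratios.items() if 0.65 <= r <= 0.73]
print(ratios, in_band)
```

Run against the real data, a computation like this is what yields the “more than 80% of these cities” figure quoted above.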

I could conclude that there was, for the most part, a rough correspondence between MIT LWC and EPI FBC: find the amount that is 69% of EPI FBC and, for 80% of cities, you will be within a few percentage points of MIT LWC. In the remaining 20%, half will be at the extremes of expensive or inexpensive, where maybe the usual calculations don’t quite apply, and the other half will be random instances due to peculiar local conditions, errors of calculation, or some other unknown factor(s).

Given the coarseness of that relationship, it seemed advisable to start with the data source most relevant to one’s particular need, and then use the other data source as a secondary reference. Within this post’s pursuit of a broad set of places that are at least minimally affordable, then, one approach would be to start with MIT LWC as a baseline, and then estimate that (in 80% of cases) EPI FBC — that is, a modest rather than minimal lifestyle — will cost roughly 45% more (i.e., 1/0.69). This rule of thumb would join an earlier one (above): increase the single-adult-no-children scenario by about 60% to get an estimate of costs for two adults in the household.
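The two rules of thumb can be stated as rough functions. The 0.69 ratio and the 60% two-adult markup come from the text; remember that the first holds only for about 80% of cities, so treat the results as ballpark figures.

```python
# Rules of thumb from the text, as rough estimator functions.
# Both are coarse approximations, not exact formulas.
def estimate_modest(mit_lwc_minimal: float) -> float:
    """Estimate EPI FBC 'modest' cost from MIT LWC 'minimal' cost."""
    return mit_lwc_minimal / 0.69        # i.e., about 45% more

def estimate_two_adults(single_adult: float) -> float:
    """Estimate two-adult household cost from the single-adult figure."""
    return single_adult * 1.6

mit = 20000
print(round(estimate_modest(mit)))       # about 28986
print(round(estimate_two_adults(mit)))   # 32000
```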

Obviously, it would not make sense to rely on MIT LWC, or any other source, if it was unreliable. In the course of exploring the data, I noticed several disagreements between EPI FBC and MIT LWC. I consulted C2ER and ERI for their views, and also got the median one-bedroom apartment (1br) rental rates from Zumper. In the process, more disparities emerged. My information and conclusions were as follows:

  1. EPI FBC said that Dallas TX and Danville IL cost about the same, for purposes of achieving a modest lifestyle. EPI FBC also said that Danville cost nearly $2,000 per year more than the nearby and seemingly comparable Terre Haute IN. Yet ERI and C2ER put Danville far below Dallas, and MIT LWC said that a minimal lifestyle in Dallas would cost 18% ($3,069) more per year than in Danville. (Zumper did not have an entry for Danville.) In short, it appeared EPI FBC had made a mistake about Danville.
  2. EPI FBC also raised doubts in its claim that a modest lifestyle was about as expensive in Fort Wayne as in Colorado Springs. MIT LWC and C2ER rated Fort Wayne as more like half as expensive as the Springs. ERI said it was 77% as expensive. Zumper said a 1br averaged $480 in Fort Wayne but $790 in Colorado Springs.
  3. EPI FBC claimed that a modest lifestyle was more expensive in little Beckley WV than in Pittsburgh. ERI and MIT LWC agreed that, as one would expect, Pittsburgh was far more expensive. C2ER and Zumper offered no data on Beckley.
  4. EPI FBC said that Fayetteville NC was as expensive as Chicago. MIT LWC, C2ER, and ERI contended that Chicago was far more expensive. Zumper reported no data for Fayetteville.
  5. MIT LWC considered McAllen TX more expensive than Cincinnati. Again, Zumper had no data, but the other sources all disagreed.
  6. It was not clear what to conclude about Minneapolis vs. Newark. ERI said Newark was somewhat less expensive. Zumper supported that, pricing 1brs at $920 per month in Newark, and $1,350 in Minneapolis. But C2ER found Newark more expensive, and EPI FBC and MIT LWC found it much more expensive.
  7. MIT LWC considered Honolulu somewhat less expensive than Oakland. Zumper agreed: $1,780 for the average 1br in Honolulu and $2,070 in Oakland. ERI found Honolulu somewhat more expensive. EPI FBC and C2ER found it much more expensive than Oakland, but disagreed on its rank: 100 for EPI FBC, but only 72 for C2ER.

In cases 1-4, EPI FBC seemed to be wrong, according to its peers and also according to personal experience. In cases 6 and 7, the situation was unclear. Possibly there were discrepancies in how cities like Newark, Minneapolis, Honolulu, and/or Oakland were defined, or perhaps the calculations differed consistent with the various sources’ target audiences. In particular, the people behind MIT LWC might conclude that Newark is less expensive than Minneapolis for the average person, but is more expensive for purposes of those on a minimal budget.

In case 5, MIT LWC seemed to be wrong. But maybe not. Given the Rio Grande Valley’s reputation as one of the poorest parts of the nation, I could believe that its circumstances might be unusual, and not necessarily friendly to the poor. Nadeau (2017, p. 5) confirmed that the MIT LWC housing component included utility costs — apparently including air conditioning. Since McAllen is an exceptionally warm place, it was plausible that it might have a higher CL than some otherwise similar alternatives.

EPI FBC was oriented toward a CL that might be on the high side, for people seeking the most affordable cities in the U.S. — and, in any case, the findings just discussed did not support the idea that EPI FBC was careful enough to serve as a primary source for CL information generally, nor that I should use it as a supplement for MIT LWC. Hence, I would be largely ignoring EPI FBC in the remainder of this post.

Narrowing the Scope of an Accurate Low CLI

The preceding section concluded that MIT LWC was the better of the two dollar-based sources. At the end of the section prior to that, ERI had stood up relatively well in comparisons against other non-dollar (i.e., indexed) sources. Now, there was a question of whether the best CLI, for purposes of finding low-cost places to live in the U.S., should combine these two, rather than using just MIT LWC.

MIT LWC seemed essential because it used dollar figures. That could provide the option of converting CLI values into real terms. Also, as noted earlier, stating values in dollars added a sense of reality to the relatively academic index values offered by other CL data sources. In addition, I liked MIT LWC for its relative transparency and its pedigree — for, that is, the open disclosure of its methods, and of its prestigious PhD founder, in contrast to the unknown and quite possibly inferior people and methods behind ERI’s data.

At this point, I decided to expand the list of cities in my dataset. Doing so would provide more data with which to resolve the question of whether to include ERI, and would also improve checks on internal consistency and offer better geographical coverage of the United States. The cities I chose were those for which I already had a value from at least one of the other sources named above. Someone, it seemed, had already decided that those cities were worth tracking, and in a few instances I would need to consult those other sources as to the best numbers for a particular city. This expansion gave me values from ERI and MIT LWC for a total of 288 cities.

It would have been helpful to have information about these additional cities earlier in the process, especially to inform my conclusions about EPI FBC. It was possible that further city comparisons would have made EPI FBC look better vis-à-vis MIT LWC. But I didn’t consider that likely. The errors by EPI FBC identified above were visible enough to raise a fair question of quality control. Performing hundreds of additional lookups in MIT LWC and ERI, for these extra cities, was tedious enough without adding another couple hundred EPI FBC lookups to the chore. I also felt that, if MIT LWC was wrong regarding a particular city, ERI was probably at least as likely as EPI FBC to detect that error.

The comparisons in the previous section, reflecting poorly on EPI FBC, had mostly involved smaller cities. EPI FBC had looked especially bad, for present purposes, because those places were likely to be less expensive, and thus more relevant for people looking for low-CL places, where one would expect MIT LWC to shine. But now, as I turned to a more intensive comparison of MIT LWC against ERI, I saw that MIT LWC had some problems too, beyond the seeming error involving McAllen TX (above).

The most visible problems were in the larger cities. For instance, which was more accurate: to say that Oakland and San Jose were more expensive than Manhattan, and that Virginia Beach and Boulder were at least as pricey as Los Angeles, as MIT LWC did — or to say they were cheaper than Boston and at least as inexpensive as Denver, respectively, as ERI did?

Answering that question would require an authoritative third-party source of information. But I had no better CLI to draw upon. To the contrary, I was now attempting to choose between the two sources I had found most worthy. Who was going to help me figure out whether MIT LWC or ERI was more accurate, in places where they disagreed?

One option was to use a private market listing source focused on rental expense, figuring that rent was typically the biggest component of CL and would generally provide a fair sense of total CL in a particular city. As an example of this sort of source, I had already used Zumper for a few price comparisons (above). I chose Zumper because it provided averages for entire cities. Zumper boasted “over a million available apartments and homes every month,” but when spread out across an entire country, apparently that could mean a shortage of inventory in some markets.

As another possibility, the methodology notes for MIT LWC (and for some other sources, e.g., EPI FBC) said they based their housing estimates on the massive governmental database underlying Fair Market Rents (FMR) estimates prepared by the U.S. Department of Housing and Urban Development (HUD). A search led to sources indicating that HUD FMR values were supposed to fall around the middle of the rent market. Easton (2015, pp. 9-10) found that HUD FMR was not an especially accurate measure of housing costs. Then again, I noticed that Easton used data from the 1990s. Data sources and methods of data collection and analysis may have changed a great deal since then. I also wondered whether Easton’s study was too general, for purposes of MIT LWC’s target population, and I had not explored the question of whether other scholars agreed with his methods or conclusions. (Unlike Zumper, HUD FMR did offer average prices, for an entire city or MSA, for the studio (i.e., efficiency) apartments that MIT LWC assumed would be used by a single adult living alone. But that seemed unimportant; in my experience, studio apartments were often hard to find, and were not always priced as one might expect. Since there was a more consistent market for 1brs, I stayed with comparisons based on 1brs.)

Regarding the big cities mentioned above, a comparison of Zumper against HUD FMR produced some substantial differences. Those differences were as follows (Zumper first, HUD FMR second): Oakland, $2,070 vs. 1,723; San Jose, $2,260 vs. $1,773; Los Angeles, $2,060 vs. $1,195; New York, $2,940 vs. $1,419; Virginia Beach, $940 vs. $939; Boston, $2,200 vs. $1,372; Denver, $1,210 vs. $1,031. In every case except Virginia Beach, Zumper significantly exceeded HUD FMR — by about 20-30% in several cases, but by 58-72% in three (counting Pittsburgh), and by 107% in New York. The HUD FMR values explained why MIT LWC found Oakland more expensive than New York, although not why it considered Virginia Beach more expensive than L.A.
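Those percentage differences can be reproduced directly from the rent figures quoted above (Pittsburgh, whose rents appear later in this post, is omitted here):

```python
# Monthly 1br rents quoted in the text: (Zumper, HUD FMR) per city.
pairs = {
    "Oakland":        (2070, 1723),
    "San Jose":       (2260, 1773),
    "Los Angeles":    (2060, 1195),
    "New York":       (2940, 1419),
    "Virginia Beach": (940,  939),
    "Boston":         (2200, 1372),
    "Denver":         (1210, 1031),
}

for city, (zumper, hud) in pairs.items():
    pct = 100 * (zumper - hud) / hud
    print(f"{city}: Zumper exceeds HUD FMR by {pct:.0f}%")
```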

Like the creator of MIT LWC, I could not simply dismiss the HUD FMR values, when for all I knew the Zumper values came largely from an atypically expensive subset of the housing market. But in light of the foregoing academic criticism of HUD FMR, neither could I simply embrace MIT LWC’s contrarian claims about relative prices among these cities. I did not believe Oakland was more expensive than New York.

One response would be that those big, expensive cities did not belong in a low CLI to begin with — that, as in my earlier comparisons of rent expense, I should have stopped paying attention to those cities as soon as I saw that they seemed to be in a different sort of rental market from most other American cities, with exceptionally high prices. At this point, however belatedly, I did decide to discard those especially expensive cities from the dataset on which I would construct my low CLI. Now that I had more cities in the list, it seemed clearer which cities should be excluded from the low CLI. For MIT LWC’s single adult, no children, it looked like the line should be drawn at the $22,000 mark. Above that, almost everything was in the Northeast Corridor, from Boston down to the D.C. suburbs of northern Virginia; in southern CA and the greater San Francisco area; in the Pacific Northwest (Seattle; Portland-Vancouver; Alaska); in Chicago-Joliet; and in a few other scattered places (Burlington VT; Santa Rosa; Santa Cruz; Boulder; Honolulu).

That left 250 cities for which I had data from ERI and MIT LWC. Once again, I looked at the graphs and calculated the rates of change, going toward the most expensive cities in each of those two lists. I tagged for removal a dozen additional cities that appeared at the upper end of both the ERI and MIT LWC lists. These were mostly in or near the aforementioned expensive areas (e.g., New Haven, Hilo, Fairbanks), the South (e.g., Fort Lauderdale, Atlanta), and Colorado (e.g., Glenwood Springs).

Developing the Low-Cost Index (LCI)

After removing that second set of expensive cities, I was left with a list of 237 cities. On these, the two sources (i.e., MIT LWC and ERI) had some disagreements. I focused on nine cities on which the disagreements were especially extreme:

  • Toledo OH. Zumper said the average 1br rent of $460 in Toledo was at the bottom of its list. That was consistent with MIT LWC, which ranked Toledo 227 out of these 237 cities, many of which were not in Zumper’s relatively short list; it was not consistent with ERI’s ranking of Toledo as 59th from the top. HUD FMR (which covered only MSAs) estimated the cost of a 1br in the Toledo MSA at $559 — presumably reflecting, as in other MSA situations, the positive influence of more expensive suburbs. Thus, ERI’s indication that Toledo was as expensive as Fort Worth (HUD FMR 1br = $770) seemed quite mistaken.
  • Pittsburgh PA. ERI considered Pittsburgh to be almost exactly as expensive as Reno NV, whereas MIT LWC put it closer to Amarillo TX. According to Zumper, the prices were 1br = $1,040 in Pittsburgh and $700 in Reno (no value for Amarillo). According to HUD FMR, the prices were 1br = $657 in Pittsburgh, $706 in Reno, and $632 in Amarillo. HUD FMR would thus say that Amarillo was closer (i.e., that MIT LWC was right), but Reno was not terribly far off, whereas Zumper saw Pittsburgh and Reno as vastly different.
  • Gulfport MS. For the rest of these cities on which ERI and MIT LWC disagreed sharply, Zumper provided no data. HUD FMR said $673 for a 1br in Gulfport. That was not too far from the $732 for nearby Mobile, which MIT LWC rated about the same; it was rather distant from $801 for a 1br in Statesboro GA, which ERI rated the same. In other words, ERI’s valuation did not seem consistent with HUD FMR data.
  • Sierra Vista AZ. HUD FMR said 1br = $598, which seemed rather different from the $679 HUD posited for Fayetteville NC, ranked similarly by MIT LWC. For Sierra Vista, it seemed ERI was somewhat closer with its similar ranking of Greenville SC (HUD FMR 1br = $656).
  • Scranton PA. MIT LWC priced Scranton (HUD FMR 1br = $657) midway between Roanoke VA (HUD FMR 1br = $692) and Jonesboro AR (HUD FMR 1br = $607), but in reverse order; that is, MIT LWC considered Jonesboro the most expensive of the three. Regardless, ERI put Scranton on a par with Charlotte NC (HUD FMR 1br = $784). The HUD FMR values just quoted did not seem to support that. (Similarly Wilkes-Barre PA.)
  • Round Rock TX. Round Rock was in the Austin MSA (HUD FMR 1br = $968), which both ERI and MIT LWC ranked in the top quarter of these 237 cities — and yet ERI put Round Rock in the bottom 25%. That seemed erroneous. It did not appear that Round Rock was in a strange submarket: various sources estimated its 1brs at between roughly $950 and $1,200.
  • Conroe TX. Conroe was part of Houston, but that MSA was so sprawling (i.e., its housing so variegated) as to discourage me from using it. Sources estimated a range of $900 to $1,000 for a Conroe 1br, consistent with MIT LWC’s high ranking but not consistent with ERI’s low ranking.
  • Martinsville VA. Martinsville was not in an MSA, so there was no HUD FMR value; but RentJungle offered a few 1br listings in the range of $382-595, relatively consistent with the low rates of Roanoke, an hour to the north. That was consistent with ERI’s ranking of Martinsville as one of the least expensive cities in the country, but not with MIT LWC’s view that Martinsville was more expensive than Charlotte or Providence.
  • Cedar Rapids IA. HUD FMR said 1br = $575. That was consistent with the low rank given by MIT LWC. ERI seemed rather off, in ranking Cedar Rapids near Grand Rapids (HUD FMR 1br = $668).

So, assuming rent expense was a fair predictor of overall CL, and assuming these sources were accurate and comparable, by my count ERI was clearly wrong for six of those nine cities, and MIT LWC was closer to the truth for a seventh; ERI was clearly right for only one, and was closer to being right for one more. These results suggested that MIT LWC was the more reliable source. But they also suggested that MIT LWC did make mistakes. That was consistent with the earlier remarks about McAllen TX and about some of the most expensive big cities. This limited information suggested that MIT LWC might be less accurate toward the extremes of richest and poorest cities.

Those conclusions suggested that it might make sense to develop a CLI that would combine MIT LWC with ERI. Such an index would give dominant weight to MIT LWC, given its greater accuracy, but the index would also give some weight to ERI, so as to reduce the risk of extreme errors that would incorrectly mark a city as affordable, or unaffordable, when it really wasn’t.

I decided against combining MIT LWC and ERI for several reasons. One was that, while I had examined some of the most extreme differences between MIT LWC and ERI, there were plenty of other differences that were fairly significant. Attempting to bridge the two sources wasn’t going to nudge MIT LWC a bit off-center; it was going to create a hybrid that could be substantially less accurate than MIT LWC. Part of the problem seemed to be that MIT LWC and ERI followed different paths: they disagreed rather markedly on the graphed shapes of the curves presenting their respective values, and I wasn’t sure how to deal with that. A hybrid would also undermine the goal of an index whose values could be readily converted into dollar amounts: there would no longer be the straightforward MIT LWC statement of annual costs of living. In addition, the task of updating or revising the index to accommodate new information could be undesirably complex.

It seemed the better approach would be to retain the MIT LWC data, correct it in cases where it erred, and base the index on the corrected dataset — to rely on manual correction, that is, rather than an automatically generated solution mindlessly incorporating ERI data where it probably didn’t belong. This manual calculation could continue to add cities, if additional data points seemed desirable in a particular area, or at a particular cost level; it could also trim more cities off the top, if the index continued to suffer from distortion there.

Having decided on that manual-correction approach, I went back to the full MIT LWC dataset and corrected the apparently erroneous dollar values for McAllen (and Harlingen), Martinsville, and Sierra Vista. To correct McAllen and Harlingen, I saw that EPI FBC and C2ER ranked them at the bottom; I saw that the dollar amounts for other cities at the bottom of the EPI FBC list averaged about $7,450 above the dollar amounts for those cities on the MIT LWC list; and I subtracted $7,450 from the EPI FBC values for McAllen and Harlingen. For Martinsville, there was no EPI FBC value, so I looked at the C2ER cities immediately above and below Martinsville on the C2ER list, and I used the average of the dollar amounts that MIT LWC had given those cities. For Sierra Vista, C2ER and ERI agreed that it was only slightly more expensive than Fond du Lac, so I used a slight increase on the MIT LWC dollar value for Fond du Lac. (For future reference, using techniques like those described in this paragraph, I also corrected ERI’s apparently erroneous index values for Toledo, Pittsburgh, Gulfport, Scranton, Wilkes-Barre, Round Rock, Conroe, and Cedar Rapids.)
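The corrections just described were simple arithmetic. They can be sketched as follows; the dollar inputs below are hypothetical placeholders, and the function names are my own, not drawn from any of the sources:

```python
# Sketch of the two correction techniques described above. All dollar
# figures passed in below are hypothetical placeholders, not the actual
# EPI FBC, C2ER, or MIT LWC values.

def correct_from_epi(epi_value, offset=7450):
    """McAllen/Harlingen-style fix: take the city's EPI FBC value and
    subtract the average gap ($7,450) observed between EPI FBC and
    MIT LWC among other bottom-of-the-list cities."""
    return epi_value - offset

def correct_from_neighbors(neighbor_mit_values):
    """Martinsville-style fix: average the MIT LWC dollar amounts of the
    cities ranked immediately above and below on the C2ER list."""
    return sum(neighbor_mit_values) / len(neighbor_mit_values)

print(correct_from_epi(23838))                 # → 16388
print(correct_from_neighbors([17724, 17765]))  # → 17744.5
```

The same two techniques extend naturally to the other corrected cities: pick whichever comparison source seems trustworthy for that city, and anchor its value to the MIT LWC dollar scale.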

Next, I eliminated the expensive cities. This time, without the need to attempt harmony with ERI, I was free to exclude cities that seemed too expensive to belong on a list of low-cost places. I looked at the cities high on the list, considered whether they were located in expensive areas, and sought a round number as the cutoff. I settled on $21,000 as the dividing line. That left me with a core set of 231 cities, reduced to 227 after deleting Round Rock as redundant with Austin, Brownsville and Harlingen as redundant with McAllen, and Wilkes-Barre as duplicative of Scranton. These cities, and the dollar amounts given to them by MIT LWC, comprised the LCI. The next section lists those cities and amounts. Technically speaking, the LCI spans the range from $16,000 to $21,000 in annual expenses. That is a range of $5,000; hence, each index point on a 0-to-100 scale (with zero at $16,000) is worth $50.
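The dollars-to-points conversion just described is a one-line calculation. A minimal sketch (the function name is my own):

```python
# Convert an annual cost-of-living figure to LCI index points: the scale
# runs from $16,000 (0 points) to $21,000 (100 points), so each index
# point is worth $50.

def lci_points(annual_cost, base=16000, dollars_per_point=50):
    return (annual_cost - base) / dollars_per_point

print(lci_points(16388))   # McAllen TX → 7.76 points
print(lci_points(20880))   # Austin TX → 97.6 points
```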

The LCI: 227 Low-Cost Cities

As described in the preceding sections, the Low-Cost Index (LCI) is a list of 227 U.S. cities, in order of rising expense, based largely on the MIT Living Wage Calculator (LWC). The dollar amounts shown here are the estimated amounts needed to pay the bills (including apartment rent and taxes) for a very basic existence for a year, for a single adult with no children. As a very rough rule of thumb, a second adult adds 60% to the amounts shown here, and a child adds 100%. The LWC offers more precise estimates for a variety of household configurations. True to its name, this low-cost index ends with Austin; it does not include more expensive cities (e.g., New York).
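That rough rule of thumb can be sketched as a function. This is my own illustration only (it assumes, as an extrapolation, that each additional child adds 100%); the LWC's own per-household figures should be preferred:

```python
# Rough household scaling per the rule of thumb above: a second adult
# adds 60%, and each child adds 100%, of the single-adult amount.
# An illustration only; MIT LWC provides precise per-household figures.

def rough_household_cost(single_adult_cost, adults=1, children=0):
    multiplier = 1.0 + 0.6 * (adults - 1) + 1.0 * children
    return single_adult_cost * multiplier

print(rough_household_cost(16388, adults=2, children=1))  # McAllen: 2 adults + 1 child
```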

  1. McAllen (TX) $16,388
  2. Roanoke (VA) $16,492
  3. Scranton (PA) $16,688
  4. Jonesboro (AR) $16,871
  5. Danville (IL) $16,935
  6. Muskogee (OK) $17,013
  7. Benton Harbor (MI) $17,023
  8. Mason City (IA) $17,055
  9. Decatur (IL) $17,103
  10. Hastings (NE) $17,146
  11. Toledo (OH) $17,167
  12. Terre Haute (IN) $17,184
  13. Decatur (AL) $17,210
  14. Beckley (WV) $17,252
  15. Dubuque (IA) $17,259
  16. Flint (MI) $17,275
  17. Hot Springs (AR) $17,317
  18. Texarkana (TX) $17,364
  19. Davenport (IA) $17,367
  20. Cedar Rapids (IA) $17,383
  21. Richmond (IN) $17,404
  22. Topeka (KS) $17,457
  23. Jefferson City (MO) $17,464
  24. Morristown (TN) $17,476
  25. Klamath Falls (OR) $17,507
  26. Moses Lake (WA) $17,513
  27. Findlay (OH) $17,515
  28. Springfield (MO) $17,537
  29. Wichita (KS) $17,552
  30. Jackson (TN) $17,558
  31. Pierre (SD) $17,569
  32. Hutchinson (KS) $17,589
  33. Spokane (WA) $17,592
  34. Peoria (IL) $17,631
  35. Enid (OK) $17,637
  36. Yakima (WA) $17,645
  37. Fond du Lac (WI) $17,653
  38. Sierra Vista (AZ) $17,660
  39. Eau Claire (WI) $17,665
  40. Waterloo (IA) $17,666
  41. Florence (AL) $17,678
  42. Wichita Falls (TX) $17,688
  43. Cleveland (TN) $17,692
  44. Green Bay (WI) $17,701
  45. Lincoln (NE) $17,703
  46. Ponca City (OK) $17,709
  47. Charleston (WV) $17,724
  48. Martinsville (VA) $17,761
  49. Joplin (MO) $17,765
  50. Las Cruces (NM) $17,776
  51. Burlington (IA) $17,799
  52. Elkhart (IN) $17,800
  53. Lake Charles (LA) $17,807
  54. Cedar City (UT) $17,815
  55. Knoxville (TN) $17,830
  56. Rome (GA) $17,848
  57. Kalamazoo (MI) $17,851
  58. Waco (TX) $17,862
  59. Fargo (ND) $17,875
  60. Dodge City (KS) $17,877
  61. Tulsa (OK) $17,884
  62. Fort Wayne (IN) $17,908
  63. Cleveland (OH) $17,935
  64. Tupelo (MS) $17,940
  65. Boise (ID) $17,953
  66. Greenville (SC) $17,958
  67. Statesboro (GA) $17,980
  68. Grand Rapids (MI) $17,982
  69. Marshfield (WI) $17,989
  70. Akron (OH) $18,007
  71. Lima (OH) $18,007
  72. Anderson (SC) $18,011
  73. Dayton (OH) $18,019
  74. South Bend (IN) $18,022
  75. Cincinnati (OH) $18,034
  76. Fayetteville (AR) $18,034
  77. Sherman (TX) $18,036
  78. Pittsburgh (PA) $18,067
  79. Amarillo (TX) $18,079
  80. Ames (IA) $18,087
  81. Conway (AR) $18,136
  82. Little Rock (AR) $18,136
  83. Bowling Green (KY) $18,147
  84. York (PA) $18,164
  85. Auburn (AL) $18,170
  86. Thomasville (NC) $18,173
  87. Bullhead City (AZ) $18,182
  88. Detroit (MI) $18,199
  89. Chattanooga (TN) $18,208
  90. Morgantown (WV) $18,213
  91. Springfield (IL) $18,219
  92. Lexington (KY) $18,236
  93. Bozeman (MT) $18,240
  94. Huntsville (AL) $18,254
  95. Jackson (MS) $18,259
  96. Rockford (IL) $18,267
  97. Omaha (NE) $18,270
  98. Dalton (GA) $18,271
  99. Grand Forks (ND) $18,283
  100. Cookeville (TN) $18,304
  101. Kennewick (WA) $18,305
  102. Augusta (GA) $18,307
  103. Syracuse (NY) $18,319
  104. Columbus (OH) $18,320
  105. Salem (OR) $18,323
  106. Greensboro (NC) $18,340
  107. Lafayette (LA) $18,363
  108. Champaign (IL) $18,375
  109. Lexington (VA) $18,395
  110. Columbia (SC) $18,410
  111. Louisville (KY) $18,430
  112. Hattiesburg (MS) $18,444
  113. Albany (GA) $18,448
  114. Valdosta (GA) $18,448
  115. Utica (NY) $18,451
  116. Oklahoma City (OK) $18,466
  117. Lubbock (TX) $18,495
  118. Tucson (AZ) $18,506
  119. Winston-Salem (NC) $18,524
  120. Edmond (OK) $18,525
  121. Norman (OK) $18,525
  122. Melbourne (FL) $18,532
  123. Danville City (VA) $18,539
  124. Evansville (IN) $18,544
  125. Salina (KS) $18,561
  126. Muncie (IN) $18,568
  127. Harrisburg (PA) $18,596
  128. El Paso (TX) $18,620
  129. St. Louis (MO) $18,629
  130. Eugene (OR) $18,635
  131. Indianapolis (IN) $18,640
  132. Rochester (NY) $18,640
  133. Kansas City (MO) $18,661
  134. Columbia (TN) $18,664
  135. Staunton (VA) $18,683
  136. Montgomery (AL) $18,734
  137. Laramie (WY) $18,738
  138. Daytona Beach (FL) $18,750
  139. Bismarck (ND) $18,779
  140. Alexandria (LA) $18,803
  141. Buffalo (NY) $18,847
  142. St. George (UT) $18,859
  143. Colorado Springs (CO) $18,883
  144. Monroe (LA) $18,899
  145. Des Moines (IA) $18,903
  146. Athens (GA) $18,904
  147. Lynchburg (VA) $18,935
  148. Asheville (NC) $18,938
  149. Palm Coast (FL) $18,940
  150. Temple (TX) $18,950
  151. Columbia (MO) $18,977
  152. Bellingham (WA) $18,989
  153. Reno (NV) $18,995
  154. Shreveport (LA) $19,011
  155. Columbus (GA) $19,036
  156. Mankato (MN) $19,103
  157. Birmingham (AL) $19,128
  158. Allentown (PA) $19,141
  159. San Antonio (TX) $19,164
  160. Seguin (TX) $19,164
  161. Memphis (TN) $19,172
  162. Tyler (TX) $19,176
  163. Vero Beach (FL) $19,192
  164. Lafayette (IN) $19,223
  165. Milwaukee (WI) $19,225
  166. Baton Rouge (LA) $19,229
  167. Twin Falls (ID) $19,230
  168. Phoenix (AZ) $19,274
  169. Albuquerque (NM) $19,276
  170. Pueblo (CO) $19,303
  171. Bakersfield (CA) $19,337
  172. Jacksonville (FL) $19,430
  173. Yuma (AZ) $19,442
  174. Salt Lake City (UT) $19,447
  175. Chapel Hill (NC) $19,477
  176. Durham (NC) $19,477
  177. Fort Worth (TX) $19,488
  178. Nacogdoches (TX) $19,500
  179. Stockton (CA) $19,517
  180. Madison (WI) $19,546
  181. Burlington (NC) $19,589
  182. Fayetteville (NC) $19,636
  183. Dare County (NC) $19,637
  184. Las Vegas (NV) $19,691
  185. Mount Vernon (WA) $19,697
  186. Raleigh (NC) $19,697
  187. Gainesville (FL) $19,720
  188. Harrisonburg (VA) $19,727
  189. Modesto (CA) $19,733
  190. Odessa (TX) $19,740
  191. Mobile (AL) $19,742
  192. Gulfport (MS) $19,777
  193. Nashville (TN) $19,803
  194. Savannah (GA) $19,804
  195. Charlotte (NC) $19,805
  196. Bloomington (IN) $19,838
  197. Providence (RI) $19,841
  198. St. Paul (MN) $19,883
  199. Wilmington (NC) $19,903
  200. Minneapolis (MN) $19,931
  201. Albany (NY) $19,999
  202. Allen (TX) $20,004
  203. Dallas (TX) $20,004
  204. Prescott (AZ) $20,042
  205. Fresno (CA) $20,069
  206. Tampa (FL) $20,092
  207. Dover (DE) $20,154
  208. Portland (ME) $20,191
  209. New Orleans (LA) $20,195
  210. Pittsfield (MA) $20,196
  211. Conroe (TX) $20,208
  212. Houston (TX) $20,208
  213. Cape Coral (FL) $20,248
  214. Fort Myers (FL) $20,248
  215. Tallahassee (FL) $20,317
  216. Manhattan (KS) $20,337
  217. Winchester (VA) $20,351
  218. Brazoria County (TX) $20,436
  219. Manchester (NH) $20,472
  220. Charlottesville (VA) $20,498
  221. Sarasota (FL) $20,608
  222. Tacoma (WA) $20,645
  223. Corpus Christi (TX) $20,739
  224. Olympia (WA) $20,753
  225. Charleston (SC) $20,754
  226. Sacramento (CA) $20,785
  227. Austin (TX) $20,880
