
Commit c611e8a

Fix internal links
1 parent 3ce0638 commit c611e8a

4 files changed: 4 additions (+4), 4 deletions (-4)

docs/guide/guidance.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ sociolinguists, and cultural anthropologists, as well as with members of the
 populations on which technology will be deployed.
 
 A single model, for example, the toxicity model that we leverage in the
-[example colab](https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Example_Colab),
+[example colab](../../tutorials/Fairness_Indicators_Example_Colab),
 can be used in many different contexts. A toxicity model deployed on a website
 to filter offensive comments, for example, is a very different use case than the
 model being deployed in an example web UI where users can type in a sentence and

docs/tutorials/Fairness_Indicators_Example_Colab.ipynb

Lines changed: 1 addition & 1 deletion

@@ -605,7 +605,7 @@
    "source": [
     "With this particular dataset and task, systematically higher false positive and false negative rates for certain identities can lead to negative consequences. For example, in a content moderation system, a higher-than-overall false positive rate for a certain group can lead to those voices being silenced. Thus, it is important to regularly evaluate these types of criteria as you develop and improve models, and utilize tools such as Fairness Indicators, TFDV, and WIT to help illuminate potential problems. Once you've identified fairness issues, you can experiment with new data sources, data balancing, or other techniques to improve performance on underperforming groups.\n",
     "\n",
-    "See [here](https://tensorflow.org/responsible_ai/fairness_indicators/guide/guidance) for more information and guidance on how to use Fairness Indicators.\n"
+    "See [here](../../guide/guidance) for more information and guidance on how to use Fairness Indicators.\n"
    ]
   },
   {

docs/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb

Lines changed: 1 addition & 1 deletion

@@ -386,7 +386,7 @@
    "## Conclusion\n",
    "Within this case study we imported a dataset into a Pandas DataFrame that we then analyzed with Fairness Indicators. Understanding the results of your model and underlying data is an important step in ensuring your model doesn't reflect harmful bias. In the context of this case study we examined the the LSAC dataset and how predictions from this data could be impacted by a students race. The concept of “what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning.”\u003csup\u003e1\u003c/sup\u003e Fairness Indicator is a tool to help mitigate fairness concerns in your machine learning model.\n",
    "\n",
-   "For more information on using Fairness Indicators and resources to learn more about fairness concerns see [here](https://www.tensorflow.org/responsible_ai/fairness_indicators/guide).\n",
+   "For more information on using Fairness Indicators and resources to learn more about fairness concerns see [here](../../).\n",
    "\n",
    "---\n",
    "\n",

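For context on the workflow that case study's conclusion refers to (evaluating a Pandas DataFrame with Fairness Indicators via TFMA), here is a minimal illustrative sketch. It is not part of this commit; the DataFrame `df` and the column names 'label', 'prediction', and 'race' are hypothetical placeholders.

```python
# Illustrative only; not part of this commit. Assumes a pandas DataFrame `df`
# with hypothetical columns 'label', 'prediction', and 'race'.
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.view import widget_view

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label', prediction_key='prediction')],
    metrics_specs=[tfma.MetricsSpec(metrics=[
        tfma.MetricConfig(class_name='FairnessIndicators',
                          config='{"thresholds": [0.5]}'),
    ])],
    slicing_specs=[
        tfma.SlicingSpec(),                       # overall metrics
        tfma.SlicingSpec(feature_keys=['race']),  # per-group slices
    ],
)

# Evaluate metrics directly on the in-memory DataFrame, then render the widget.
eval_result = tfma.analyze_raw_data(df, eval_config)
widget_view.render_fairness_indicator(eval_result)
```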
docs/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb

Lines changed: 1 addition & 1 deletion

@@ -67,7 +67,7 @@
    "id": "-DQoReGDeN16"
   },
   "source": [
-   "This notebook demonstrates an easy way to create and optimize constrained problems using the TFCO library. This method can be useful in improving models when we find that they’re not performing equally well across different slices of our data, which we can identify using [Fairness Indicators](https://www.tensorflow.org/responsible_ai/fairness_indicators/guide). The second of Google’s AI principles states that our technology should avoid creating or reinforcing unfair bias, and we believe this technique can help improve model fairness in some situations. In particular, this notebook will:\n",
+   "This notebook demonstrates an easy way to create and optimize constrained problems using the TFCO library. This method can be useful in improving models when we find that they’re not performing equally well across different slices of our data, which we can identify using [Fairness Indicators](../../). The second of Google’s AI principles states that our technology should avoid creating or reinforcing unfair bias, and we believe this technique can help improve model fairness in some situations. In particular, this notebook will:\n",
    "\n",
    "\n",
    "* Train a simple, *unconstrained* neural network model to detect a person's smile in images using [`tf.keras`](https://www.tensorflow.org/guide/keras) and the large-scale CelebFaces Attributes ([CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)) dataset.\n",

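As background on the constrained-optimization approach that notebook introduces, the sketch below shows roughly how TFCO expresses a rate-constrained training problem, loosely following the pattern in the TFCO documentation. It is not this commit's code or the notebook's exact code: `model`, `features`, `labels`, and `groups` are assumed to be defined elsewhere, and attribute names may vary across TFCO releases.

```python
# Illustrative sketch only, loosely following the TFCO documentation's recipe.
# Assumes `model`, `features`, `labels`, and a boolean `groups` tensor exist.
import tensorflow as tf
import tensorflow_constrained_optimization as tfco

# In eager mode TFCO takes callables so rates can be re-evaluated each step.
context = tfco.rate_context(lambda: model(features), labels=lambda: labels)
slice_context = context.subset(lambda: groups)  # one identity slice

# Minimize overall error subject to a cap on the slice's false-positive rate.
problem = tfco.RateMinimizationProblem(
    tfco.error_rate(context),
    [tfco.false_positive_rate(slice_context) <= 0.05])

optimizer = tfco.LagrangianOptimizerV2(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    num_constraints=problem.num_constraints)

var_list = (model.trainable_variables +
            list(problem.trainable_variables) +
            optimizer.trainable_variables())

# One training step; in practice this runs in a loop over fresh minibatches
# bound to `features`, `labels`, and `groups`.
optimizer.minimize(problem, var_list=var_list)
```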
Comments (0)